Test Report: KVM_Linux_crio 20317

bb508b30435b2a744d00b2f75d06f98d338973f1:2025-01-27:38093

Failed tests (12/312)

TestAddons/parallel/Ingress (156.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-645690 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-645690 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-645690 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [36b1d7e0-3d7b-43ab-9b8d-0887cb4e17d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [36b1d7e0-3d7b-43ab-9b8d-0887cb4e17d4] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004879195s
I0127 12:17:22.619315  368946 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-645690 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.56619706s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-645690 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.68
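The step at addons_test.go:262 above is the one that failed: the in-VM curl timed out, and curl's timeout exit code 28 is what surfaces as "ssh: Process exited with status 28". Below is a minimal shell sketch for re-running the same checks by hand against this profile; the commands are copied from the log above, while the explicit --max-time value is an illustrative assumption and not part of the test:

	# Re-run the failing ingress probe from inside the minikube VM (profile name taken from this report).
	out/minikube-linux-amd64 -p addons-645690 ssh \
	  "curl -s --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

	# Re-check the ingress-dns lookup the same way, resolving against the VM IP reported by the ip command.
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-645690 ip)"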
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-645690 -n addons-645690
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 logs -n 25: (1.28827616s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-356622                                                                     | download-only-356622 | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC | 27 Jan 25 12:12 UTC |
	| delete  | -p download-only-484253                                                                     | download-only-484253 | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC | 27 Jan 25 12:12 UTC |
	| delete  | -p download-only-356622                                                                     | download-only-356622 | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC | 27 Jan 25 12:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-799340 | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC |                     |
	|         | binary-mirror-799340                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41381                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-799340                                                                     | binary-mirror-799340 | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC | 27 Jan 25 12:12 UTC |
	| addons  | disable dashboard -p                                                                        | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC |                     |
	|         | addons-645690                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC |                     |
	|         | addons-645690                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-645690 --wait=true                                                                | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC | 27 Jan 25 12:16 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-645690 addons disable                                                                | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-645690 addons disable                                                                | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-645690 addons                                                                        | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-645690 addons disable                                                                | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-645690 addons                                                                        | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-645690 addons                                                                        | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | -p addons-645690                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-645690 ip                                                                            | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	| addons  | addons-645690 addons disable                                                                | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-645690 ssh cat                                                                       | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:17 UTC | 27 Jan 25 12:17 UTC |
	|         | /opt/local-path-provisioner/pvc-ec5a4462-37e5-484e-bbe3-5c2da761259b_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-645690 addons disable                                                                | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:17 UTC | 27 Jan 25 12:17 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-645690 addons                                                                        | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:17 UTC | 27 Jan 25 12:17 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-645690 addons disable                                                                | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:17 UTC | 27 Jan 25 12:17 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-645690 addons                                                                        | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:17 UTC | 27 Jan 25 12:17 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-645690 addons                                                                        | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:17 UTC | 27 Jan 25 12:17 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-645690 ssh curl -s                                                                   | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:17 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-645690 ip                                                                            | addons-645690        | jenkins | v1.35.0 | 27 Jan 25 12:19 UTC | 27 Jan 25 12:19 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:12:36
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:12:36.071971  369702 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:12:36.072066  369702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:12:36.072075  369702 out.go:358] Setting ErrFile to fd 2...
	I0127 12:12:36.072079  369702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:12:36.072251  369702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 12:12:36.072862  369702 out.go:352] Setting JSON to false
	I0127 12:12:36.073730  369702 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":17696,"bootTime":1737962260,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:12:36.073791  369702 start.go:139] virtualization: kvm guest
	I0127 12:12:36.075666  369702 out.go:177] * [addons-645690] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:12:36.076971  369702 notify.go:220] Checking for updates...
	I0127 12:12:36.076981  369702 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:12:36.078340  369702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:12:36.079512  369702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 12:12:36.080624  369702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 12:12:36.081623  369702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:12:36.082794  369702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:12:36.084151  369702 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:12:36.116715  369702 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 12:12:36.117841  369702 start.go:297] selected driver: kvm2
	I0127 12:12:36.117851  369702 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:12:36.117862  369702 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:12:36.118519  369702 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:12:36.118640  369702 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:12:36.133173  369702 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:12:36.133227  369702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:12:36.133470  369702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:12:36.133505  369702 cni.go:84] Creating CNI manager for ""
	I0127 12:12:36.133549  369702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:12:36.133557  369702 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:12:36.133605  369702 start.go:340] cluster config:
	{Name:addons-645690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-645690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:12:36.133700  369702 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:12:36.135180  369702 out.go:177] * Starting "addons-645690" primary control-plane node in "addons-645690" cluster
	I0127 12:12:36.136169  369702 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:12:36.136197  369702 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:12:36.136206  369702 cache.go:56] Caching tarball of preloaded images
	I0127 12:12:36.136294  369702 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 12:12:36.136307  369702 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 12:12:36.136610  369702 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/config.json ...
	I0127 12:12:36.136630  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/config.json: {Name:mk1a38f3f347b3bde20967d39e7284644b5642ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:12:36.137247  369702 start.go:360] acquireMachinesLock for addons-645690: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:12:36.137311  369702 start.go:364] duration metric: took 44.59µs to acquireMachinesLock for "addons-645690"
	I0127 12:12:36.137336  369702 start.go:93] Provisioning new machine with config: &{Name:addons-645690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-645690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:12:36.137391  369702 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 12:12:36.138779  369702 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0127 12:12:36.138902  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:12:36.138945  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:12:36.152861  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I0127 12:12:36.153442  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:12:36.154041  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:12:36.154061  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:12:36.154425  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:12:36.154591  369702 main.go:141] libmachine: (addons-645690) Calling .GetMachineName
	I0127 12:12:36.154748  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:12:36.154877  369702 start.go:159] libmachine.API.Create for "addons-645690" (driver="kvm2")
	I0127 12:12:36.154909  369702 client.go:168] LocalClient.Create starting
	I0127 12:12:36.154949  369702 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem
	I0127 12:12:36.325185  369702 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem
	I0127 12:12:36.388426  369702 main.go:141] libmachine: Running pre-create checks...
	I0127 12:12:36.388449  369702 main.go:141] libmachine: (addons-645690) Calling .PreCreateCheck
	I0127 12:12:36.388966  369702 main.go:141] libmachine: (addons-645690) Calling .GetConfigRaw
	I0127 12:12:36.389420  369702 main.go:141] libmachine: Creating machine...
	I0127 12:12:36.389436  369702 main.go:141] libmachine: (addons-645690) Calling .Create
	I0127 12:12:36.389621  369702 main.go:141] libmachine: (addons-645690) creating KVM machine...
	I0127 12:12:36.389644  369702 main.go:141] libmachine: (addons-645690) creating network...
	I0127 12:12:36.390989  369702 main.go:141] libmachine: (addons-645690) DBG | found existing default KVM network
	I0127 12:12:36.391700  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:36.391529  369725 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001151f0}
	I0127 12:12:36.391724  369702 main.go:141] libmachine: (addons-645690) DBG | created network xml: 
	I0127 12:12:36.391736  369702 main.go:141] libmachine: (addons-645690) DBG | <network>
	I0127 12:12:36.391742  369702 main.go:141] libmachine: (addons-645690) DBG |   <name>mk-addons-645690</name>
	I0127 12:12:36.391748  369702 main.go:141] libmachine: (addons-645690) DBG |   <dns enable='no'/>
	I0127 12:12:36.391753  369702 main.go:141] libmachine: (addons-645690) DBG |   
	I0127 12:12:36.391763  369702 main.go:141] libmachine: (addons-645690) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0127 12:12:36.391776  369702 main.go:141] libmachine: (addons-645690) DBG |     <dhcp>
	I0127 12:12:36.391787  369702 main.go:141] libmachine: (addons-645690) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0127 12:12:36.391795  369702 main.go:141] libmachine: (addons-645690) DBG |     </dhcp>
	I0127 12:12:36.391801  369702 main.go:141] libmachine: (addons-645690) DBG |   </ip>
	I0127 12:12:36.391809  369702 main.go:141] libmachine: (addons-645690) DBG |   
	I0127 12:12:36.391819  369702 main.go:141] libmachine: (addons-645690) DBG | </network>
	I0127 12:12:36.391828  369702 main.go:141] libmachine: (addons-645690) DBG | 
	I0127 12:12:36.397287  369702 main.go:141] libmachine: (addons-645690) DBG | trying to create private KVM network mk-addons-645690 192.168.39.0/24...
	I0127 12:12:36.463889  369702 main.go:141] libmachine: (addons-645690) DBG | private KVM network mk-addons-645690 192.168.39.0/24 created
	I0127 12:12:36.463933  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:36.463839  369725 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 12:12:36.463951  369702 main.go:141] libmachine: (addons-645690) setting up store path in /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690 ...
	I0127 12:12:36.463966  369702 main.go:141] libmachine: (addons-645690) building disk image from file:///home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:12:36.464119  369702 main.go:141] libmachine: (addons-645690) Downloading /home/jenkins/minikube-integration/20317-361578/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:12:36.771133  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:36.770971  369725 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa...
	I0127 12:12:36.830645  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:36.830493  369725 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/addons-645690.rawdisk...
	I0127 12:12:36.830676  369702 main.go:141] libmachine: (addons-645690) DBG | Writing magic tar header
	I0127 12:12:36.830686  369702 main.go:141] libmachine: (addons-645690) DBG | Writing SSH key tar header
	I0127 12:12:36.830694  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:36.830659  369725 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690 ...
	I0127 12:12:36.830825  369702 main.go:141] libmachine: (addons-645690) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690
	I0127 12:12:36.830847  369702 main.go:141] libmachine: (addons-645690) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690 (perms=drwx------)
	I0127 12:12:36.830853  369702 main.go:141] libmachine: (addons-645690) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube/machines
	I0127 12:12:36.830865  369702 main.go:141] libmachine: (addons-645690) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 12:12:36.830875  369702 main.go:141] libmachine: (addons-645690) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578
	I0127 12:12:36.830886  369702 main.go:141] libmachine: (addons-645690) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 12:12:36.830894  369702 main.go:141] libmachine: (addons-645690) DBG | checking permissions on dir: /home/jenkins
	I0127 12:12:36.830907  369702 main.go:141] libmachine: (addons-645690) DBG | checking permissions on dir: /home
	I0127 12:12:36.830916  369702 main.go:141] libmachine: (addons-645690) DBG | skipping /home - not owner
	I0127 12:12:36.830926  369702 main.go:141] libmachine: (addons-645690) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube/machines (perms=drwxr-xr-x)
	I0127 12:12:36.830934  369702 main.go:141] libmachine: (addons-645690) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube (perms=drwxr-xr-x)
	I0127 12:12:36.830941  369702 main.go:141] libmachine: (addons-645690) setting executable bit set on /home/jenkins/minikube-integration/20317-361578 (perms=drwxrwxr-x)
	I0127 12:12:36.830950  369702 main.go:141] libmachine: (addons-645690) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 12:12:36.830961  369702 main.go:141] libmachine: (addons-645690) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 12:12:36.830978  369702 main.go:141] libmachine: (addons-645690) creating domain...
	I0127 12:12:36.832082  369702 main.go:141] libmachine: (addons-645690) define libvirt domain using xml: 
	I0127 12:12:36.832105  369702 main.go:141] libmachine: (addons-645690) <domain type='kvm'>
	I0127 12:12:36.832114  369702 main.go:141] libmachine: (addons-645690)   <name>addons-645690</name>
	I0127 12:12:36.832125  369702 main.go:141] libmachine: (addons-645690)   <memory unit='MiB'>4000</memory>
	I0127 12:12:36.832154  369702 main.go:141] libmachine: (addons-645690)   <vcpu>2</vcpu>
	I0127 12:12:36.832196  369702 main.go:141] libmachine: (addons-645690)   <features>
	I0127 12:12:36.832209  369702 main.go:141] libmachine: (addons-645690)     <acpi/>
	I0127 12:12:36.832215  369702 main.go:141] libmachine: (addons-645690)     <apic/>
	I0127 12:12:36.832223  369702 main.go:141] libmachine: (addons-645690)     <pae/>
	I0127 12:12:36.832232  369702 main.go:141] libmachine: (addons-645690)     
	I0127 12:12:36.832243  369702 main.go:141] libmachine: (addons-645690)   </features>
	I0127 12:12:36.832256  369702 main.go:141] libmachine: (addons-645690)   <cpu mode='host-passthrough'>
	I0127 12:12:36.832268  369702 main.go:141] libmachine: (addons-645690)   
	I0127 12:12:36.832277  369702 main.go:141] libmachine: (addons-645690)   </cpu>
	I0127 12:12:36.832287  369702 main.go:141] libmachine: (addons-645690)   <os>
	I0127 12:12:36.832297  369702 main.go:141] libmachine: (addons-645690)     <type>hvm</type>
	I0127 12:12:36.832308  369702 main.go:141] libmachine: (addons-645690)     <boot dev='cdrom'/>
	I0127 12:12:36.832327  369702 main.go:141] libmachine: (addons-645690)     <boot dev='hd'/>
	I0127 12:12:36.832338  369702 main.go:141] libmachine: (addons-645690)     <bootmenu enable='no'/>
	I0127 12:12:36.832345  369702 main.go:141] libmachine: (addons-645690)   </os>
	I0127 12:12:36.832356  369702 main.go:141] libmachine: (addons-645690)   <devices>
	I0127 12:12:36.832365  369702 main.go:141] libmachine: (addons-645690)     <disk type='file' device='cdrom'>
	I0127 12:12:36.832383  369702 main.go:141] libmachine: (addons-645690)       <source file='/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/boot2docker.iso'/>
	I0127 12:12:36.832395  369702 main.go:141] libmachine: (addons-645690)       <target dev='hdc' bus='scsi'/>
	I0127 12:12:36.832401  369702 main.go:141] libmachine: (addons-645690)       <readonly/>
	I0127 12:12:36.832408  369702 main.go:141] libmachine: (addons-645690)     </disk>
	I0127 12:12:36.832419  369702 main.go:141] libmachine: (addons-645690)     <disk type='file' device='disk'>
	I0127 12:12:36.832431  369702 main.go:141] libmachine: (addons-645690)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 12:12:36.832445  369702 main.go:141] libmachine: (addons-645690)       <source file='/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/addons-645690.rawdisk'/>
	I0127 12:12:36.832457  369702 main.go:141] libmachine: (addons-645690)       <target dev='hda' bus='virtio'/>
	I0127 12:12:36.832471  369702 main.go:141] libmachine: (addons-645690)     </disk>
	I0127 12:12:36.832484  369702 main.go:141] libmachine: (addons-645690)     <interface type='network'>
	I0127 12:12:36.832493  369702 main.go:141] libmachine: (addons-645690)       <source network='mk-addons-645690'/>
	I0127 12:12:36.832506  369702 main.go:141] libmachine: (addons-645690)       <model type='virtio'/>
	I0127 12:12:36.832515  369702 main.go:141] libmachine: (addons-645690)     </interface>
	I0127 12:12:36.832524  369702 main.go:141] libmachine: (addons-645690)     <interface type='network'>
	I0127 12:12:36.832534  369702 main.go:141] libmachine: (addons-645690)       <source network='default'/>
	I0127 12:12:36.832545  369702 main.go:141] libmachine: (addons-645690)       <model type='virtio'/>
	I0127 12:12:36.832563  369702 main.go:141] libmachine: (addons-645690)     </interface>
	I0127 12:12:36.832578  369702 main.go:141] libmachine: (addons-645690)     <serial type='pty'>
	I0127 12:12:36.832590  369702 main.go:141] libmachine: (addons-645690)       <target port='0'/>
	I0127 12:12:36.832602  369702 main.go:141] libmachine: (addons-645690)     </serial>
	I0127 12:12:36.832619  369702 main.go:141] libmachine: (addons-645690)     <console type='pty'>
	I0127 12:12:36.832632  369702 main.go:141] libmachine: (addons-645690)       <target type='serial' port='0'/>
	I0127 12:12:36.832642  369702 main.go:141] libmachine: (addons-645690)     </console>
	I0127 12:12:36.832652  369702 main.go:141] libmachine: (addons-645690)     <rng model='virtio'>
	I0127 12:12:36.832668  369702 main.go:141] libmachine: (addons-645690)       <backend model='random'>/dev/random</backend>
	I0127 12:12:36.832683  369702 main.go:141] libmachine: (addons-645690)     </rng>
	I0127 12:12:36.832698  369702 main.go:141] libmachine: (addons-645690)     
	I0127 12:12:36.832709  369702 main.go:141] libmachine: (addons-645690)     
	I0127 12:12:36.832719  369702 main.go:141] libmachine: (addons-645690)   </devices>
	I0127 12:12:36.832727  369702 main.go:141] libmachine: (addons-645690) </domain>
	I0127 12:12:36.832735  369702 main.go:141] libmachine: (addons-645690) 
	I0127 12:12:36.837089  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:87:48:29 in network default
	I0127 12:12:36.837674  369702 main.go:141] libmachine: (addons-645690) starting domain...
	I0127 12:12:36.837694  369702 main.go:141] libmachine: (addons-645690) ensuring networks are active...
	I0127 12:12:36.837701  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:36.838487  369702 main.go:141] libmachine: (addons-645690) Ensuring network default is active
	I0127 12:12:36.838865  369702 main.go:141] libmachine: (addons-645690) Ensuring network mk-addons-645690 is active
	I0127 12:12:36.839365  369702 main.go:141] libmachine: (addons-645690) getting domain XML...
	I0127 12:12:36.840078  369702 main.go:141] libmachine: (addons-645690) creating domain...
	I0127 12:12:38.198197  369702 main.go:141] libmachine: (addons-645690) waiting for IP...
	I0127 12:12:38.199262  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:38.199628  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:38.199705  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:38.199648  369725 retry.go:31] will retry after 212.349549ms: waiting for domain to come up
	I0127 12:12:38.414186  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:38.414767  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:38.414817  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:38.414760  369725 retry.go:31] will retry after 267.704404ms: waiting for domain to come up
	I0127 12:12:38.684200  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:38.684604  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:38.684660  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:38.684576  369725 retry.go:31] will retry after 340.197457ms: waiting for domain to come up
	I0127 12:12:39.026074  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:39.026592  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:39.026703  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:39.026614  369725 retry.go:31] will retry after 422.800694ms: waiting for domain to come up
	I0127 12:12:39.451450  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:39.451951  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:39.451980  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:39.451906  369725 retry.go:31] will retry after 629.250365ms: waiting for domain to come up
	I0127 12:12:40.082684  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:40.083123  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:40.083147  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:40.083091  369725 retry.go:31] will retry after 952.955803ms: waiting for domain to come up
	I0127 12:12:41.037323  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:41.037834  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:41.037864  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:41.037804  369725 retry.go:31] will retry after 1.113558641s: waiting for domain to come up
	I0127 12:12:42.152803  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:42.153383  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:42.153416  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:42.153328  369725 retry.go:31] will retry after 1.23427225s: waiting for domain to come up
	I0127 12:12:43.389766  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:43.390138  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:43.390168  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:43.390098  369725 retry.go:31] will retry after 1.425627206s: waiting for domain to come up
	I0127 12:12:44.817699  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:44.818059  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:44.818091  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:44.818029  369725 retry.go:31] will retry after 1.816282641s: waiting for domain to come up
	I0127 12:12:46.635517  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:46.635990  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:46.636021  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:46.635944  369725 retry.go:31] will retry after 1.769366548s: waiting for domain to come up
	I0127 12:12:48.408141  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:48.408683  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:48.408713  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:48.408625  369725 retry.go:31] will retry after 3.465669311s: waiting for domain to come up
	I0127 12:12:51.875897  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:51.876296  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:51.876335  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:51.876250  369725 retry.go:31] will retry after 2.936142531s: waiting for domain to come up
	I0127 12:12:54.813664  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:12:54.814109  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find current IP address of domain addons-645690 in network mk-addons-645690
	I0127 12:12:54.814146  369702 main.go:141] libmachine: (addons-645690) DBG | I0127 12:12:54.814079  369725 retry.go:31] will retry after 5.656587206s: waiting for domain to come up
	I0127 12:13:00.475406  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:00.475904  369702 main.go:141] libmachine: (addons-645690) found domain IP: 192.168.39.68
	I0127 12:13:00.475925  369702 main.go:141] libmachine: (addons-645690) reserving static IP address...
	I0127 12:13:00.475934  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has current primary IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:00.476245  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find host DHCP lease matching {name: "addons-645690", mac: "52:54:00:83:72:bb", ip: "192.168.39.68"} in network mk-addons-645690
	I0127 12:13:00.547583  369702 main.go:141] libmachine: (addons-645690) reserved static IP address 192.168.39.68 for domain addons-645690
	I0127 12:13:00.547618  369702 main.go:141] libmachine: (addons-645690) DBG | Getting to WaitForSSH function...
	I0127 12:13:00.547625  369702 main.go:141] libmachine: (addons-645690) waiting for SSH...
	I0127 12:13:00.550281  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:00.550633  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690
	I0127 12:13:00.550661  369702 main.go:141] libmachine: (addons-645690) DBG | unable to find defined IP address of network mk-addons-645690 interface with MAC address 52:54:00:83:72:bb
	I0127 12:13:00.550833  369702 main.go:141] libmachine: (addons-645690) DBG | Using SSH client type: external
	I0127 12:13:00.550858  369702 main.go:141] libmachine: (addons-645690) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa (-rw-------)
	I0127 12:13:00.550890  369702 main.go:141] libmachine: (addons-645690) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:13:00.550905  369702 main.go:141] libmachine: (addons-645690) DBG | About to run SSH command:
	I0127 12:13:00.550942  369702 main.go:141] libmachine: (addons-645690) DBG | exit 0
	I0127 12:13:00.556334  369702 main.go:141] libmachine: (addons-645690) DBG | SSH cmd err, output: exit status 255: 
	I0127 12:13:00.556349  369702 main.go:141] libmachine: (addons-645690) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0127 12:13:00.556356  369702 main.go:141] libmachine: (addons-645690) DBG | command : exit 0
	I0127 12:13:00.556361  369702 main.go:141] libmachine: (addons-645690) DBG | err     : exit status 255
	I0127 12:13:00.556367  369702 main.go:141] libmachine: (addons-645690) DBG | output  : 
	I0127 12:13:03.558262  369702 main.go:141] libmachine: (addons-645690) DBG | Getting to WaitForSSH function...
	I0127 12:13:03.560694  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:03.561150  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:03.561179  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:03.561303  369702 main.go:141] libmachine: (addons-645690) DBG | Using SSH client type: external
	I0127 12:13:03.561358  369702 main.go:141] libmachine: (addons-645690) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa (-rw-------)
	I0127 12:13:03.561396  369702 main.go:141] libmachine: (addons-645690) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 12:13:03.561405  369702 main.go:141] libmachine: (addons-645690) DBG | About to run SSH command:
	I0127 12:13:03.561424  369702 main.go:141] libmachine: (addons-645690) DBG | exit 0
	I0127 12:13:03.682212  369702 main.go:141] libmachine: (addons-645690) DBG | SSH cmd err, output: <nil>: 
	I0127 12:13:03.682431  369702 main.go:141] libmachine: (addons-645690) KVM machine creation complete
	I0127 12:13:03.682738  369702 main.go:141] libmachine: (addons-645690) Calling .GetConfigRaw
	I0127 12:13:03.683295  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:03.683481  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:03.683615  369702 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 12:13:03.683634  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:03.685096  369702 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 12:13:03.685111  369702 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 12:13:03.685117  369702 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 12:13:03.685123  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:03.687341  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:03.687750  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:03.687786  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:03.687885  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:03.688080  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:03.688231  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:03.688362  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:03.688519  369702 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:03.688749  369702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0127 12:13:03.688768  369702 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 12:13:03.785624  369702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:13:03.785646  369702 main.go:141] libmachine: Detecting the provisioner...
	I0127 12:13:03.785654  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:03.788646  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:03.788951  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:03.788980  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:03.789147  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:03.789467  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:03.789646  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:03.789815  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:03.789994  369702 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:03.790212  369702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0127 12:13:03.790228  369702 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 12:13:03.891367  369702 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 12:13:03.891532  369702 main.go:141] libmachine: found compatible host: buildroot
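
A minimal Go sketch, not the libmachine implementation, of how this detection step can work: read /etc/os-release and pick the provisioner from the ID= field (here "buildroot", matching the output above).

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			fmt.Println("detected distro:", id) // "buildroot" on the minikube ISO
		}
	}
}
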
	I0127 12:13:03.891553  369702 main.go:141] libmachine: Provisioning with buildroot...
	I0127 12:13:03.891565  369702 main.go:141] libmachine: (addons-645690) Calling .GetMachineName
	I0127 12:13:03.891842  369702 buildroot.go:166] provisioning hostname "addons-645690"
	I0127 12:13:03.891871  369702 main.go:141] libmachine: (addons-645690) Calling .GetMachineName
	I0127 12:13:03.892082  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:03.894755  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:03.895149  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:03.895205  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:03.895218  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:03.895408  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:03.895593  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:03.895754  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:03.895909  369702 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:03.896101  369702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0127 12:13:03.896116  369702 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-645690 && echo "addons-645690" | sudo tee /etc/hostname
	I0127 12:13:04.008436  369702 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-645690
	
	I0127 12:13:04.008464  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:04.011357  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.011697  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.011719  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.011885  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:04.012089  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.012223  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.012340  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:04.012466  369702 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:04.012671  369702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0127 12:13:04.012693  369702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-645690' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-645690/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-645690' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:13:04.118584  369702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
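
The /etc/hosts edit run over SSH above is idempotent: it only touches the file when no entry for the hostname exists. Roughly the same logic as a standalone Go sketch (hypothetical, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "addons-645690"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")

	// Is there already a line ending in the hostname?
	present := false
	for _, l := range lines {
		if strings.HasSuffix(l, " "+host) || strings.HasSuffix(l, "\t"+host) {
			present = true
			break
		}
	}
	if !present {
		// Prefer rewriting an existing 127.0.1.1 entry; otherwise append one.
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + host
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+host)
		}
	}

	out := strings.Join(lines, "\n") + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Print(out)
}
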
	I0127 12:13:04.118612  369702 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 12:13:04.118631  369702 buildroot.go:174] setting up certificates
	I0127 12:13:04.118644  369702 provision.go:84] configureAuth start
	I0127 12:13:04.118653  369702 main.go:141] libmachine: (addons-645690) Calling .GetMachineName
	I0127 12:13:04.118889  369702 main.go:141] libmachine: (addons-645690) Calling .GetIP
	I0127 12:13:04.121727  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.122132  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.122158  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.122275  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:04.124493  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.124829  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.124857  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.124987  369702 provision.go:143] copyHostCerts
	I0127 12:13:04.125101  369702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 12:13:04.125303  369702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 12:13:04.125409  369702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 12:13:04.125497  369702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.addons-645690 san=[127.0.0.1 192.168.39.68 addons-645690 localhost minikube]
	I0127 12:13:04.178097  369702 provision.go:177] copyRemoteCerts
	I0127 12:13:04.178147  369702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:13:04.178166  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:04.181333  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.181659  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.181688  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.181781  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:04.181960  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.182126  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:04.182263  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:04.260739  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 12:13:04.283977  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:13:04.306193  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:13:04.328427  369702 provision.go:87] duration metric: took 209.767738ms to configureAuth
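
The "generating server cert" step above issues a certificate signed by the local minikube CA with the SANs listed in the log. A self-contained Go sketch of that idea, using a throwaway CA in place of minikube's ca.pem/ca-key.pem, so this is an illustration rather than the project's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server cert with the SANs from the log:
	// [127.0.0.1 192.168.39.68 addons-645690 localhost minikube]
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-645690"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-645690", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
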
	I0127 12:13:04.328459  369702 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:13:04.328654  369702 config.go:182] Loaded profile config "addons-645690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:13:04.328760  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:04.331720  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.332038  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.332069  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.332225  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:04.332391  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.332543  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.332683  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:04.332874  369702 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:04.333042  369702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0127 12:13:04.333061  369702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 12:13:04.546705  369702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 12:13:04.546733  369702 main.go:141] libmachine: Checking connection to Docker...
	I0127 12:13:04.546744  369702 main.go:141] libmachine: (addons-645690) Calling .GetURL
	I0127 12:13:04.548099  369702 main.go:141] libmachine: (addons-645690) DBG | using libvirt version 6000000
	I0127 12:13:04.550151  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.550572  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.550604  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.550741  369702 main.go:141] libmachine: Docker is up and running!
	I0127 12:13:04.550757  369702 main.go:141] libmachine: Reticulating splines...
	I0127 12:13:04.550766  369702 client.go:171] duration metric: took 28.395845056s to LocalClient.Create
	I0127 12:13:04.550792  369702 start.go:167] duration metric: took 28.395915221s to libmachine.API.Create "addons-645690"
	I0127 12:13:04.550807  369702 start.go:293] postStartSetup for "addons-645690" (driver="kvm2")
	I0127 12:13:04.550820  369702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:13:04.550843  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:04.551074  369702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:13:04.551110  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:04.553261  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.553586  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.553612  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.553768  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:04.553950  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.554079  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:04.554186  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:04.632711  369702 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:13:04.636910  369702 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:13:04.636936  369702 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 12:13:04.637006  369702 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 12:13:04.637041  369702 start.go:296] duration metric: took 86.226843ms for postStartSetup
	I0127 12:13:04.637083  369702 main.go:141] libmachine: (addons-645690) Calling .GetConfigRaw
	I0127 12:13:04.637673  369702 main.go:141] libmachine: (addons-645690) Calling .GetIP
	I0127 12:13:04.640062  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.640429  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.640458  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.640696  369702 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/config.json ...
	I0127 12:13:04.640854  369702 start.go:128] duration metric: took 28.503452938s to createHost
	I0127 12:13:04.640877  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:04.643000  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.643390  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.643420  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.643593  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:04.643789  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.643932  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.644126  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:04.644261  369702 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:04.644485  369702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0127 12:13:04.644500  369702 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:13:04.742929  369702 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737979984.707565572
	
	I0127 12:13:04.742956  369702 fix.go:216] guest clock: 1737979984.707565572
	I0127 12:13:04.742966  369702 fix.go:229] Guest: 2025-01-27 12:13:04.707565572 +0000 UTC Remote: 2025-01-27 12:13:04.640866468 +0000 UTC m=+28.605269626 (delta=66.699104ms)
	I0127 12:13:04.742994  369702 fix.go:200] guest clock delta is within tolerance: 66.699104ms
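
The guest-clock check above compares the timestamp returned by `date +%s.%N` on the guest with the host-side timestamp and accepts the machine when the difference is small. A small Go sketch using the two timestamps from the log; the 2s tolerance here is an assumed value for illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the fix.go lines above.
	guest := time.Date(2025, 1, 27, 12, 13, 4, 707565572, time.UTC)  // parsed from "date +%s.%N"
	remote := time.Date(2025, 1, 27, 12, 13, 4, 640866468, time.UTC) // host-side timestamp

	const tolerance = 2 * time.Second // hypothetical threshold for this sketch
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints 66.699104ms
	} else {
		fmt.Println("delta too large; a real implementation would resync the guest clock")
	}
}
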
	I0127 12:13:04.743000  369702 start.go:83] releasing machines lock for "addons-645690", held for 28.605675144s
	I0127 12:13:04.743045  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:04.743303  369702 main.go:141] libmachine: (addons-645690) Calling .GetIP
	I0127 12:13:04.746118  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.746492  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.746517  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.746714  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:04.747197  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:04.747381  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:04.747501  369702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 12:13:04.747567  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:04.747598  369702 ssh_runner.go:195] Run: cat /version.json
	I0127 12:13:04.747624  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:04.750402  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.750427  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.750813  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.750836  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:04.750868  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.750882  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:04.751049  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:04.751050  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:04.751227  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.751237  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:04.751460  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:04.751476  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:04.751677  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:04.751664  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:04.856311  369702 ssh_runner.go:195] Run: systemctl --version
	I0127 12:13:04.862171  369702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 12:13:05.015817  369702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 12:13:05.021967  369702 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:13:05.022037  369702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:13:05.037752  369702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:13:05.037774  369702 start.go:495] detecting cgroup driver to use...
	I0127 12:13:05.037848  369702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:13:05.054190  369702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:13:05.068208  369702 docker.go:217] disabling cri-docker service (if available) ...
	I0127 12:13:05.068255  369702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 12:13:05.081663  369702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 12:13:05.094951  369702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 12:13:05.216972  369702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 12:13:05.375719  369702 docker.go:233] disabling docker service ...
	I0127 12:13:05.375787  369702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 12:13:05.390514  369702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 12:13:05.403067  369702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 12:13:05.522374  369702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 12:13:05.641369  369702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 12:13:05.655300  369702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:13:05.673458  369702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 12:13:05.673520  369702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:13:05.683498  369702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 12:13:05.683548  369702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:13:05.693325  369702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:13:05.703049  369702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:13:05.712776  369702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:13:05.722937  369702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:13:05.732661  369702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:13:05.749600  369702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 12:13:05.759507  369702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:13:05.768720  369702 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:13:05.768770  369702 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:13:05.781770  369702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:13:05.791280  369702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:13:05.908823  369702 ssh_runner.go:195] Run: sudo systemctl restart crio
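
The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: switch the pause image to 3.10, force the cgroupfs cgroup manager, set conmon_cgroup to "pod", and allow unprivileged low ports inside pods. A Go sketch (not minikube's code) that applies the same substitutions to an in-memory sample so the resulting drop-in is easy to see:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pause_image -> registry.k8s.io/pause:3.10
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// cgroup_manager -> cgroupfs, conmon_cgroup -> pod
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	// allow unprivileged processes in pods to bind low ports
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

	fmt.Print(conf)
}
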
	I0127 12:13:05.995210  369702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 12:13:05.995296  369702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 12:13:06.000333  369702 start.go:563] Will wait 60s for crictl version
	I0127 12:13:06.000389  369702 ssh_runner.go:195] Run: which crictl
	I0127 12:13:06.004362  369702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:13:06.043084  369702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 12:13:06.043193  369702 ssh_runner.go:195] Run: crio --version
	I0127 12:13:06.071416  369702 ssh_runner.go:195] Run: crio --version
	I0127 12:13:06.097553  369702 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 12:13:06.099235  369702 main.go:141] libmachine: (addons-645690) Calling .GetIP
	I0127 12:13:06.101918  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:06.102255  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:06.102278  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:06.102470  369702 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 12:13:06.106835  369702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:13:06.119158  369702 kubeadm.go:883] updating cluster {Name:addons-645690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-645690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:13:06.119277  369702 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:13:06.119364  369702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:13:06.151264  369702 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 12:13:06.151326  369702 ssh_runner.go:195] Run: which lz4
	I0127 12:13:06.155385  369702 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:13:06.159412  369702 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:13:06.159442  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 12:13:07.466468  369702 crio.go:462] duration metric: took 1.311104169s to copy over tarball
	I0127 12:13:07.466570  369702 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:13:09.593770  369702 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127157049s)
	I0127 12:13:09.593816  369702 crio.go:469] duration metric: took 2.12731729s to extract the tarball
	I0127 12:13:09.593827  369702 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 12:13:09.630937  369702 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 12:13:09.672965  369702 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 12:13:09.672991  369702 cache_images.go:84] Images are preloaded, skipping loading
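
The preload logic above asks crictl for the image list and only copies and extracts the preloaded-images tarball when the kube-apiserver image is missing; after extraction the second listing confirms all images are present. A rough Go sketch of that check, assuming crictl is on PATH and following crictl's JSON output field names:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl not available:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, "registry.k8s.io/kube-apiserver:") {
				fmt.Println("preloaded images already present:", tag)
				return
			}
		}
	}
	fmt.Println("kube-apiserver image missing; would copy and extract the preload tarball")
}
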
	I0127 12:13:09.672999  369702 kubeadm.go:934] updating node { 192.168.39.68 8443 v1.32.1 crio true true} ...
	I0127 12:13:09.673104  369702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-645690 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-645690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:13:09.673167  369702 ssh_runner.go:195] Run: crio config
	I0127 12:13:09.720203  369702 cni.go:84] Creating CNI manager for ""
	I0127 12:13:09.720226  369702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:13:09.720238  369702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:13:09.720262  369702 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-645690 NodeName:addons-645690 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:13:09.720412  369702 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-645690"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.68"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:13:09.720484  369702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:13:09.730516  369702 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:13:09.730618  369702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:13:09.739989  369702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0127 12:13:09.756021  369702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:13:09.771646  369702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0127 12:13:09.787346  369702 ssh_runner.go:195] Run: grep 192.168.39.68	control-plane.minikube.internal$ /etc/hosts
	I0127 12:13:09.791299  369702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:13:09.803232  369702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:13:09.918635  369702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:13:09.936148  369702 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690 for IP: 192.168.39.68
	I0127 12:13:09.936172  369702 certs.go:194] generating shared ca certs ...
	I0127 12:13:09.936189  369702 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:09.936343  369702 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 12:13:10.195564  369702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt ...
	I0127 12:13:10.195597  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt: {Name:mkfb31a7271ea775c2331d15190bd081e9225c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.195773  369702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key ...
	I0127 12:13:10.195784  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key: {Name:mk7b3f7543656242e9d247f27cf36ae0b0235511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.196649  369702 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 12:13:10.341340  369702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt ...
	I0127 12:13:10.341374  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt: {Name:mkb1d3711a7a68b4a0f3cf04138b9b17ba5d2873 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.341524  369702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key ...
	I0127 12:13:10.341535  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key: {Name:mk43099635d70aa8e28a00a5b53907f713285040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.342244  369702 certs.go:256] generating profile certs ...
	I0127 12:13:10.342313  369702 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.key
	I0127 12:13:10.342332  369702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt with IP's: []
	I0127 12:13:10.429638  369702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt ...
	I0127 12:13:10.429670  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: {Name:mk4b2120adf6895019ac306319d108c20da98f43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.429827  369702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.key ...
	I0127 12:13:10.429838  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.key: {Name:mk175887502c37583881f0b2bbb7a9e739835f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.430595  369702 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.key.f748c68e
	I0127 12:13:10.430616  369702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.crt.f748c68e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.68]
	I0127 12:13:10.578610  369702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.crt.f748c68e ...
	I0127 12:13:10.578642  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.crt.f748c68e: {Name:mka9c65276aeac126fef2e2f1ed94784d1331241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.578846  369702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.key.f748c68e ...
	I0127 12:13:10.578861  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.key.f748c68e: {Name:mk3f7355cb728eafe0e64625a121bb3118ff6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.578939  369702 certs.go:381] copying /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.crt.f748c68e -> /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.crt
	I0127 12:13:10.579009  369702 certs.go:385] copying /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.key.f748c68e -> /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.key
	I0127 12:13:10.579054  369702 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/proxy-client.key
	I0127 12:13:10.579073  369702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/proxy-client.crt with IP's: []
	I0127 12:13:10.756219  369702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/proxy-client.crt ...
	I0127 12:13:10.756251  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/proxy-client.crt: {Name:mk6f4c9df0f1a38d1d00cb938c2c85f0858f7c4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.761722  369702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/proxy-client.key ...
	I0127 12:13:10.761742  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/proxy-client.key: {Name:mk0fb49b109df44f261d5ffaa61403e524e07193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:10.762088  369702 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 12:13:10.762145  369702 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 12:13:10.762180  369702 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 12:13:10.762207  369702 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 12:13:10.762868  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:13:10.791761  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:13:10.829273  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:13:10.853305  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:13:10.876387  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 12:13:10.898835  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:13:10.921008  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:13:10.943576  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:13:10.965700  369702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:13:10.988687  369702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:13:11.011197  369702 ssh_runner.go:195] Run: openssl version
	I0127 12:13:11.017287  369702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:13:11.029219  369702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:13:11.034311  369702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:13:11.034378  369702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:13:11.040467  369702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
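
The two commands above publish minikubeCA.pem into the system trust store: openssl computes the subject hash and the certificate is linked as <hash>.0 (b5213941.0 in this run). A small Go sketch of the same steps, requiring root and openssl and meant only as an illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as in the log

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("created", link)
}
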
	I0127 12:13:11.051545  369702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:13:11.055690  369702 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:13:11.055752  369702 kubeadm.go:392] StartCluster: {Name:addons-645690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-645690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:13:11.055857  369702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 12:13:11.055906  369702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 12:13:11.091732  369702 cri.go:89] found id: ""
	I0127 12:13:11.091818  369702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:13:11.102076  369702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:13:11.116342  369702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:13:11.126152  369702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:13:11.126174  369702 kubeadm.go:157] found existing configuration files:
	
	I0127 12:13:11.126231  369702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:13:11.135004  369702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:13:11.135066  369702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:13:11.144026  369702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:13:11.152473  369702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:13:11.152522  369702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:13:11.161664  369702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:13:11.170213  369702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:13:11.170264  369702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:13:11.179069  369702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:13:11.187558  369702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:13:11.187613  369702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:13:11.196998  369702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:13:11.258445  369702 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:13:11.258504  369702 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:13:11.387043  369702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:13:11.387148  369702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:13:11.387233  369702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:13:11.398948  369702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:13:11.419700  369702 out.go:235]   - Generating certificates and keys ...
	I0127 12:13:11.419809  369702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:13:11.419953  369702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:13:11.584759  369702 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:13:11.747515  369702 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:13:12.074879  369702 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:13:12.246074  369702 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:13:12.322324  369702 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:13:12.322485  369702 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-645690 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0127 12:13:12.363827  369702 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:13:12.363985  369702 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-645690 localhost] and IPs [192.168.39.68 127.0.0.1 ::1]
	I0127 12:13:12.481416  369702 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:13:12.543563  369702 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:13:12.604882  369702 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:13:12.604994  369702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:13:13.071943  369702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:13:13.171688  369702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:13:13.284425  369702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:13:13.578041  369702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:13:13.914668  369702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:13:13.915200  369702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:13:13.917595  369702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:13:13.919603  369702 out.go:235]   - Booting up control plane ...
	I0127 12:13:13.919709  369702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:13:13.919811  369702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:13:13.921686  369702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:13:13.936552  369702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:13:13.942663  369702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:13:13.942708  369702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:13:14.073398  369702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:13:14.073505  369702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:13:14.574506  369702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.810434ms
	I0127 12:13:14.574649  369702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:13:19.574225  369702 kubeadm.go:310] [api-check] The API server is healthy after 5.001188328s
	I0127 12:13:19.588758  369702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:13:19.605350  369702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:13:19.629752  369702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:13:19.630016  369702 kubeadm.go:310] [mark-control-plane] Marking the node addons-645690 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:13:19.645386  369702 kubeadm.go:310] [bootstrap-token] Using token: iitu0j.x1c3wwkw4m8zuxdo
	I0127 12:13:19.646672  369702 out.go:235]   - Configuring RBAC rules ...
	I0127 12:13:19.646800  369702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:13:19.651623  369702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:13:19.665770  369702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:13:19.669228  369702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:13:19.672677  369702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:13:19.676399  369702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:13:20.348882  369702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:13:20.570970  369702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:13:20.980759  369702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:13:20.980811  369702 kubeadm.go:310] 
	I0127 12:13:20.980896  369702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:13:20.980906  369702 kubeadm.go:310] 
	I0127 12:13:20.981003  369702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:13:20.981022  369702 kubeadm.go:310] 
	I0127 12:13:20.981052  369702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:13:20.981165  369702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:13:20.981227  369702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:13:20.981234  369702 kubeadm.go:310] 
	I0127 12:13:20.981275  369702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:13:20.981281  369702 kubeadm.go:310] 
	I0127 12:13:20.981322  369702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:13:20.981336  369702 kubeadm.go:310] 
	I0127 12:13:20.981417  369702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:13:20.981522  369702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:13:20.981634  369702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:13:20.981645  369702 kubeadm.go:310] 
	I0127 12:13:20.981772  369702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:13:20.981883  369702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:13:20.981900  369702 kubeadm.go:310] 
	I0127 12:13:20.982000  369702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iitu0j.x1c3wwkw4m8zuxdo \
	I0127 12:13:20.982113  369702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 \
	I0127 12:13:20.982145  369702 kubeadm.go:310] 	--control-plane 
	I0127 12:13:20.982161  369702 kubeadm.go:310] 
	I0127 12:13:20.982291  369702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:13:20.982305  369702 kubeadm.go:310] 
	I0127 12:13:20.982420  369702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iitu0j.x1c3wwkw4m8zuxdo \
	I0127 12:13:20.982583  369702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 
	I0127 12:13:20.983038  369702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:13:20.983075  369702 cni.go:84] Creating CNI manager for ""
	I0127 12:13:20.983086  369702 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:13:20.984871  369702 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 12:13:20.986185  369702 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 12:13:21.000405  369702 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 12:13:21.018255  369702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:13:21.018365  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:21.018419  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-645690 minikube.k8s.io/updated_at=2025_01_27T12_13_21_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=addons-645690 minikube.k8s.io/primary=true
	I0127 12:13:21.040368  369702 ops.go:34] apiserver oom_adj: -16
	I0127 12:13:21.178671  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:21.679374  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:22.179060  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:22.679184  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:23.178807  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:23.679566  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:24.179080  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:24.679449  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:25.179751  369702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:13:25.275901  369702 kubeadm.go:1113] duration metric: took 4.257608963s to wait for elevateKubeSystemPrivileges
	I0127 12:13:25.275954  369702 kubeadm.go:394] duration metric: took 14.220207481s to StartCluster
	I0127 12:13:25.275984  369702 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:25.276140  369702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 12:13:25.276704  369702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:13:25.276891  369702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:13:25.276923  369702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 12:13:25.277039  369702 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0127 12:13:25.277190  369702 addons.go:69] Setting yakd=true in profile "addons-645690"
	I0127 12:13:25.277213  369702 addons.go:69] Setting gcp-auth=true in profile "addons-645690"
	I0127 12:13:25.277224  369702 config.go:182] Loaded profile config "addons-645690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:13:25.277236  369702 addons.go:69] Setting storage-provisioner=true in profile "addons-645690"
	I0127 12:13:25.277230  369702 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-645690"
	I0127 12:13:25.277237  369702 addons.go:69] Setting default-storageclass=true in profile "addons-645690"
	I0127 12:13:25.277257  369702 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-645690"
	I0127 12:13:25.277269  369702 addons.go:238] Setting addon storage-provisioner=true in "addons-645690"
	I0127 12:13:25.277271  369702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-645690"
	I0127 12:13:25.277296  369702 addons.go:69] Setting volcano=true in profile "addons-645690"
	I0127 12:13:25.277298  369702 addons.go:69] Setting volumesnapshots=true in profile "addons-645690"
	I0127 12:13:25.277299  369702 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-645690"
	I0127 12:13:25.277308  369702 addons.go:238] Setting addon volcano=true in "addons-645690"
	I0127 12:13:25.277314  369702 addons.go:238] Setting addon volumesnapshots=true in "addons-645690"
	I0127 12:13:25.277320  369702 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-645690"
	I0127 12:13:25.277330  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.277348  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.277217  369702 addons.go:238] Setting addon yakd=true in "addons-645690"
	I0127 12:13:25.277387  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.277308  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.277772  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.277794  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.277278  369702 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-645690"
	I0127 12:13:25.277824  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.277827  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.277841  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.277862  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.277225  369702 addons.go:69] Setting ingress-dns=true in profile "addons-645690"
	I0127 12:13:25.277875  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.277350  369702 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-645690"
	I0127 12:13:25.277841  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.277925  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.277247  369702 mustload.go:65] Loading cluster: addons-645690
	I0127 12:13:25.277932  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.277772  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.277996  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.277279  369702 addons.go:69] Setting inspektor-gadget=true in profile "addons-645690"
	I0127 12:13:25.278084  369702 addons.go:238] Setting addon inspektor-gadget=true in "addons-645690"
	I0127 12:13:25.277196  369702 addons.go:69] Setting ingress=true in profile "addons-645690"
	I0127 12:13:25.278141  369702 addons.go:238] Setting addon ingress=true in "addons-645690"
	I0127 12:13:25.278155  369702 config.go:182] Loaded profile config "addons-645690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:13:25.278178  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.277288  369702 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-645690"
	I0127 12:13:25.278284  369702 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-645690"
	I0127 12:13:25.278306  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.278341  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.278381  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.277256  369702 addons.go:69] Setting registry=true in profile "addons-645690"
	I0127 12:13:25.278439  369702 addons.go:238] Setting addon registry=true in "addons-645690"
	I0127 12:13:25.278467  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.278190  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.277288  369702 addons.go:69] Setting cloud-spanner=true in profile "addons-645690"
	I0127 12:13:25.278520  369702 addons.go:238] Setting addon cloud-spanner=true in "addons-645690"
	I0127 12:13:25.277887  369702 addons.go:238] Setting addon ingress-dns=true in "addons-645690"
	I0127 12:13:25.278528  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.278126  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.277794  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.278597  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.278598  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.278638  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.277283  369702 addons.go:69] Setting metrics-server=true in profile "addons-645690"
	I0127 12:13:25.278665  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.278670  369702 addons.go:238] Setting addon metrics-server=true in "addons-645690"
	I0127 12:13:25.278689  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.278713  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.279002  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.279040  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.279186  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.279196  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.283084  369702 out.go:177] * Verifying Kubernetes components...
	I0127 12:13:25.284542  369702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:13:25.294595  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37159
	I0127 12:13:25.295027  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.295746  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.295770  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.296152  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.297795  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37863
	I0127 12:13:25.306878  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.306926  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.306955  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.306985  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.307005  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.307015  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.307457  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.307502  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.307650  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0127 12:13:25.308149  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.308181  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.308486  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.308528  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.318126  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0127 12:13:25.318631  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.319245  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.319278  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.319297  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.319581  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.319678  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.319785  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.319811  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.320087  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.320286  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.320563  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.320586  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.320653  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.321268  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.321854  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.321903  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.325034  369702 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-645690"
	I0127 12:13:25.325090  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.325448  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.325480  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.327187  369702 addons.go:238] Setting addon default-storageclass=true in "addons-645690"
	I0127 12:13:25.327227  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.327562  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.327605  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.347169  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0127 12:13:25.347947  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.348078  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35549
	I0127 12:13:25.349349  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37313
	I0127 12:13:25.350262  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0127 12:13:25.350751  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.350771  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.350855  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.351434  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.351450  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.351526  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.351589  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I0127 12:13:25.352049  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.352149  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.352199  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.352837  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.352883  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.353130  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.353564  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.353596  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.353896  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.353913  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.354702  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.354866  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.354887  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.355945  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.355988  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.363162  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.363234  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.363247  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.363544  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I0127 12:13:25.363701  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
	I0127 12:13:25.363816  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.363988  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0127 12:13:25.363989  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.364114  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.364298  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0127 12:13:25.364507  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0127 12:13:25.364688  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33227
	I0127 12:13:25.365133  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.365180  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.365240  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.365388  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0127 12:13:25.366695  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.366843  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.366927  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.366999  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.367157  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.367169  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.367251  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.368775  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I0127 12:13:25.369012  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.369032  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.369369  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.369383  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.370165  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.370220  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.370270  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.370431  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.370444  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.371002  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.371028  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.371281  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0127 12:13:25.371969  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.372007  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.372321  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.372345  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.372470  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.372533  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.372568  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I0127 12:13:25.372711  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.372761  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.373579  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.373620  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.374278  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.374289  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I0127 12:13:25.374365  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.374811  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.375212  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.375246  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.375383  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.375413  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.375703  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.375766  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.375929  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.376467  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.376497  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.376514  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.377667  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:25.378412  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.378756  369702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:13:25.378975  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.379018  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.379992  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0127 12:13:25.380346  369702 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:13:25.380364  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:13:25.380396  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.381571  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0127 12:13:25.384502  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0127 12:13:25.384761  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.384861  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.384890  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.384933  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.384965  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.385128  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.385315  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.386895  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0127 12:13:25.388134  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0127 12:13:25.389402  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0127 12:13:25.390608  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0127 12:13:25.391891  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0127 12:13:25.393251  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0127 12:13:25.394136  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I0127 12:13:25.394252  369702 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0127 12:13:25.394277  369702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0127 12:13:25.394305  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.394515  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41603
	I0127 12:13:25.398269  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.398318  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.400208  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.400256  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.400696  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.400837  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.400970  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I0127 12:13:25.401085  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34341
	I0127 12:13:25.401187  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.401253  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.401276  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.401303  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.401568  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.401589  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.401648  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.401778  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.401853  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.401935  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.401982  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.402414  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.402434  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.402501  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.404353  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.404485  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.404540  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.404620  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.404684  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.404899  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.404912  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.405076  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.405089  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.405277  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.405290  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.405847  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.405914  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.405988  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.406055  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.406806  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.406880  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.406890  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.406896  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.406903  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.406918  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.406964  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.406970  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.408135  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.408204  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.408257  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.408591  369702 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0127 12:13:25.409024  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.409069  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.409139  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.409393  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.409416  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.409904  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.410107  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.410161  369702 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 12:13:25.410176  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0127 12:13:25.410194  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.410671  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.410950  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.411481  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:25.411529  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:25.411626  369702 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0127 12:13:25.412421  369702 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0127 12:13:25.412604  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.413212  369702 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 12:13:25.413232  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0127 12:13:25.413252  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.413984  369702 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 12:13:25.414001  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0127 12:13:25.414019  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.414249  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.414527  369702 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0127 12:13:25.415048  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0127 12:13:25.415630  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.415940  369702 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0127 12:13:25.416003  369702 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0127 12:13:25.416013  369702 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0127 12:13:25.416031  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.416080  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.416103  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.416451  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.416850  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.417033  369702 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0127 12:13:25.417046  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0127 12:13:25.417124  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.417184  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.417391  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.417416  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.417698  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.418278  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.418510  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.418851  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.420735  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.421360  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I0127 12:13:25.421519  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.421545  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.421690  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.421725  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.421892  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.422520  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.422566  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.422628  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.422640  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.422665  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.422675  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.422700  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.422755  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.422798  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.422832  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.422866  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.423024  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.423071  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.423111  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.423286  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.423338  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.423375  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.423633  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.423845  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.423898  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.424213  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.424455  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.424471  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.424574  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.424634  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.424809  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.424968  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.425134  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.425752  369702 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0127 12:13:25.426647  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0127 12:13:25.426650  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.426994  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.427516  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.427542  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.427917  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.428185  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.428235  369702 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0127 12:13:25.428297  369702 out.go:177]   - Using image docker.io/busybox:stable
	I0127 12:13:25.429569  369702 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 12:13:25.429589  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0127 12:13:25.429606  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.429746  369702 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0127 12:13:25.429761  369702 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0127 12:13:25.429777  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.430203  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.432339  369702 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0127 12:13:25.433302  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.433913  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.433940  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.433942  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.434156  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.434197  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.434423  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.434680  369702 out.go:177]   - Using image docker.io/registry:2.8.3
	I0127 12:13:25.434760  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.434780  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.434950  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.435015  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.435046  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.435127  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.435263  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.435857  369702 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0127 12:13:25.435880  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0127 12:13:25.435896  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.436116  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41993
	I0127 12:13:25.436917  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.437975  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.438005  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.438622  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.438960  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.439004  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.439435  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.439504  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.439678  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.439842  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.440096  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.440283  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.441028  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.441248  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:25.441274  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:25.441455  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:25.441470  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:25.441479  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:25.441486  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:25.441640  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:25.441655  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 12:13:25.441733  369702 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0127 12:13:25.445429  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0127 12:13:25.445883  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.446376  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.446397  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.446762  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.447074  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.447811  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0127 12:13:25.447942  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0127 12:13:25.448582  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.449043  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.449161  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.449182  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.449606  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.449753  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.449776  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.449891  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
	I0127 12:13:25.450034  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.450158  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.450314  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.450388  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.451437  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.451457  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.451842  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.452122  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.452283  369702 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:13:25.452298  369702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:13:25.452317  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.453328  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.453507  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38447
	I0127 12:13:25.453757  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.453928  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:25.454007  369702 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0127 12:13:25.454735  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:25.454774  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:25.455199  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:25.455439  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:25.455459  369702 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 12:13:25.455483  369702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 12:13:25.455506  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.455713  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.457004  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.457251  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:25.457339  369702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0127 12:13:25.458160  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.458189  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.458340  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.458452  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.458523  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.458644  369702 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0127 12:13:25.458650  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.458667  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.459225  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.459256  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.459373  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	W0127 12:13:25.459498  369702 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0127 12:13:25.459531  369702 retry.go:31] will retry after 228.79853ms: ssh: handshake failed: EOF
	I0127 12:13:25.459553  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.459694  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.459856  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.460367  369702 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0127 12:13:25.460393  369702 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0127 12:13:25.460412  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.460365  369702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 12:13:25.461827  369702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 12:13:25.463263  369702 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 12:13:25.463292  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0127 12:13:25.463311  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:25.463394  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.463519  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.463549  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.463684  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.463861  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.463983  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.464119  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.465914  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.466310  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:25.466337  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:25.466448  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:25.466641  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:25.466769  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:25.466872  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:25.644141  369702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:13:25.644187  369702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 12:13:25.834237  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:13:25.890931  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0127 12:13:25.927707  369702 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0127 12:13:25.927730  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0127 12:13:25.949589  369702 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0127 12:13:25.949622  369702 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0127 12:13:25.966908  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 12:13:25.980477  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0127 12:13:26.020254  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 12:13:26.028614  369702 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0127 12:13:26.028637  369702 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0127 12:13:26.039562  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0127 12:13:26.061382  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0127 12:13:26.066131  369702 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0127 12:13:26.066158  369702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0127 12:13:26.089217  369702 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0127 12:13:26.089245  369702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0127 12:13:26.111897  369702 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 12:13:26.111919  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0127 12:13:26.152169  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0127 12:13:26.157777  369702 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0127 12:13:26.157802  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0127 12:13:26.192295  369702 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0127 12:13:26.192328  369702 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0127 12:13:26.223529  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:13:26.267328  369702 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 12:13:26.267355  369702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 12:13:26.268731  369702 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0127 12:13:26.268758  369702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0127 12:13:26.285191  369702 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0127 12:13:26.285218  369702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0127 12:13:26.301969  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0127 12:13:26.331290  369702 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0127 12:13:26.331328  369702 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0127 12:13:26.453513  369702 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:13:26.453545  369702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 12:13:26.530117  369702 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0127 12:13:26.530150  369702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0127 12:13:26.561075  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 12:13:26.597104  369702 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0127 12:13:26.597142  369702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0127 12:13:26.609594  369702 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0127 12:13:26.609614  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0127 12:13:26.787380  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0127 12:13:26.826972  369702 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0127 12:13:26.827007  369702 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0127 12:13:26.934880  369702 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0127 12:13:26.934911  369702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0127 12:13:27.152950  369702 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 12:13:27.152975  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0127 12:13:27.266597  369702 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0127 12:13:27.266621  369702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0127 12:13:27.493614  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 12:13:27.634461  369702 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0127 12:13:27.634487  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0127 12:13:27.830727  369702 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.186537063s)
	I0127 12:13:27.830763  369702 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.186538298s)
	I0127 12:13:27.830799  369702 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0127 12:13:27.831777  369702 node_ready.go:35] waiting up to 6m0s for node "addons-645690" to be "Ready" ...
	I0127 12:13:27.835178  369702 node_ready.go:49] node "addons-645690" has status "Ready":"True"
	I0127 12:13:27.835203  369702 node_ready.go:38] duration metric: took 3.382362ms for node "addons-645690" to be "Ready" ...
	I0127 12:13:27.835214  369702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:13:27.845132  369702 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:28.068426  369702 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0127 12:13:28.068462  369702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0127 12:13:28.114475  369702 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0127 12:13:28.114501  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0127 12:13:28.303723  369702 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0127 12:13:28.303754  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0127 12:13:28.333768  369702 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-645690" context rescaled to 1 replicas
	I0127 12:13:28.661683  369702 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 12:13:28.661717  369702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0127 12:13:28.958683  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0127 12:13:29.876119  369702 pod_ready.go:103] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:30.128375  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.294096976s)
	I0127 12:13:30.128435  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:30.128446  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:30.128822  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:30.128846  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:30.128864  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:30.128873  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:30.129129  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:30.129153  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:30.129165  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:30.592786  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.701806652s)
	I0127 12:13:30.592849  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:30.592864  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:30.593183  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:30.593263  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:30.593279  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:30.593291  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:30.593304  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:30.593535  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:30.593565  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:30.593574  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:30.688451  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:30.688473  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:30.688755  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:30.688814  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:30.688832  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:31.895195  369702 pod_ready.go:103] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:32.256423  369702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0127 12:13:32.256480  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:32.259174  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:32.259680  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:32.259712  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:32.259885  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:32.260098  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:32.260282  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:32.260430  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:32.870343  369702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0127 12:13:33.010529  369702 addons.go:238] Setting addon gcp-auth=true in "addons-645690"
	I0127 12:13:33.010618  369702 host.go:66] Checking if "addons-645690" exists ...
	I0127 12:13:33.011108  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:33.011160  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:33.027539  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36841
	I0127 12:13:33.028005  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:33.028671  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:33.028697  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:33.029050  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:33.029584  369702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:13:33.029612  369702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:13:33.045297  369702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
	I0127 12:13:33.045832  369702 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:13:33.046313  369702 main.go:141] libmachine: Using API Version  1
	I0127 12:13:33.046335  369702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:13:33.046782  369702 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:13:33.046992  369702 main.go:141] libmachine: (addons-645690) Calling .GetState
	I0127 12:13:33.048654  369702 main.go:141] libmachine: (addons-645690) Calling .DriverName
	I0127 12:13:33.048871  369702 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0127 12:13:33.048895  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHHostname
	I0127 12:13:33.051358  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:33.051773  369702 main.go:141] libmachine: (addons-645690) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:72:bb", ip: ""} in network mk-addons-645690: {Iface:virbr1 ExpiryTime:2025-01-27 13:12:51 +0000 UTC Type:0 Mac:52:54:00:83:72:bb Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:addons-645690 Clientid:01:52:54:00:83:72:bb}
	I0127 12:13:33.051804  369702 main.go:141] libmachine: (addons-645690) DBG | domain addons-645690 has defined IP address 192.168.39.68 and MAC address 52:54:00:83:72:bb in network mk-addons-645690
	I0127 12:13:33.051940  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHPort
	I0127 12:13:33.052133  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHKeyPath
	I0127 12:13:33.052298  369702 main.go:141] libmachine: (addons-645690) Calling .GetSSHUsername
	I0127 12:13:33.052485  369702 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/addons-645690/id_rsa Username:docker}
	I0127 12:13:33.999891  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.032942322s)
	I0127 12:13:33.999944  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:33.999956  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:33.999987  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.019473445s)
	I0127 12:13:34.000039  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000057  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000081  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.979793877s)
	I0127 12:13:34.000107  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000123  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000178  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.960589142s)
	I0127 12:13:34.000199  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000214  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000245  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.000283  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.938861971s)
	I0127 12:13:34.000297  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.000304  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.000310  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.000312  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000320  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000327  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000328  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000395  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.848200155s)
	I0127 12:13:34.000290  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.000413  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.000414  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000423  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000427  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.000435  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000437  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.000446  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000455  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.776902148s)
	I0127 12:13:34.000452  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000472  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000480  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000556  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.698559441s)
	I0127 12:13:34.000572  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.000580  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000423  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.000859  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.000877  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.000884  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.000891  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.000896  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.000905  369702 addons.go:479] Verifying addon ingress=true in "addons-645690"
	I0127 12:13:34.000926  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.000934  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.001105  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.001117  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.001126  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.001133  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.001215  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.001242  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.001249  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.001257  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.001264  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.002407  369702 out.go:177] * Verifying ingress addon...
	I0127 12:13:34.002594  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.002613  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.002626  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.002648  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.002654  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.002828  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.002836  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.004709  369702 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0127 12:13:34.005165  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.005197  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.005203  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.005210  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.005217  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.005414  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.005428  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.005437  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.005445  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.005538  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.005548  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.005614  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.005620  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.005627  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.005633  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.005682  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.005819  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.444701011s)
	I0127 12:13:34.005847  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.005857  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.005863  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.005891  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.005897  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.005899  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.005906  369702 addons.go:479] Verifying addon registry=true in "addons-645690"
	I0127 12:13:34.005949  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.218538202s)
	I0127 12:13:34.006072  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.51242334s)
	I0127 12:13:34.005908  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	W0127 12:13:34.006113  369702 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 12:13:34.006075  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.006140  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.006141  369702 retry.go:31] will retry after 330.661259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0127 12:13:34.005966  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.006186  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.005824  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.006381  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.006392  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.006414  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.006426  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.006400  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.006724  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.006734  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.006743  369702 addons.go:479] Verifying addon metrics-server=true in "addons-645690"
	I0127 12:13:34.007553  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.007568  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.007577  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.007585  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.007847  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.007879  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.007886  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.008154  369702 out.go:177] * Verifying registry addon...
	I0127 12:13:34.009193  369702 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-645690 service yakd-dashboard -n yakd-dashboard
	
	I0127 12:13:34.010131  369702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0127 12:13:34.017502  369702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0127 12:13:34.017517  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:34.017886  369702 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0127 12:13:34.017904  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:34.052399  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.052429  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.052752  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.052829  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.052851  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.337733  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0127 12:13:34.355420  369702 pod_ready.go:103] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:34.517509  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:34.521722  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:34.690683  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.731919475s)
	I0127 12:13:34.690700  369702 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.64180062s)
	I0127 12:13:34.690749  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.690854  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.691138  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:34.691186  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.691202  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.691215  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:34.691225  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:34.691444  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:34.691459  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:34.691471  369702 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-645690"
	I0127 12:13:34.692299  369702 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0127 12:13:34.693301  369702 out.go:177] * Verifying csi-hostpath-driver addon...
	I0127 12:13:34.694720  369702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0127 12:13:34.695988  369702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0127 12:13:34.696084  369702 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0127 12:13:34.696101  369702 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0127 12:13:34.712498  369702 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0127 12:13:34.712519  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:34.743166  369702 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0127 12:13:34.743199  369702 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0127 12:13:34.761742  369702 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 12:13:34.761763  369702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0127 12:13:34.777808  369702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0127 12:13:35.015920  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:35.020595  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:35.201973  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:35.509065  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:35.514066  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:35.643199  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.305414974s)
	I0127 12:13:35.643257  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:35.643276  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:35.643574  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:35.643597  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:35.643614  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:35.643624  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:35.643857  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:35.643877  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:35.643903  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:35.733020  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:35.783842  369702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.005990019s)
	I0127 12:13:35.783907  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:35.783928  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:35.784279  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:35.784310  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:35.784348  369702 main.go:141] libmachine: (addons-645690) DBG | Closing plugin on server side
	I0127 12:13:35.784353  369702 main.go:141] libmachine: Making call to close driver server
	I0127 12:13:35.784441  369702 main.go:141] libmachine: (addons-645690) Calling .Close
	I0127 12:13:35.784681  369702 main.go:141] libmachine: Successfully made call to close driver server
	I0127 12:13:35.784697  369702 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 12:13:35.785606  369702 addons.go:479] Verifying addon gcp-auth=true in "addons-645690"
	I0127 12:13:35.788071  369702 out.go:177] * Verifying gcp-auth addon...
	I0127 12:13:35.789943  369702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0127 12:13:35.858616  369702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0127 12:13:35.858644  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:36.025571  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:36.028253  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:36.200596  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:36.295474  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:36.358676  369702 pod_ready.go:103] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:36.511437  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:36.517092  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:36.700420  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:36.793951  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:37.008605  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:37.013567  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:37.200797  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:37.294049  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:37.514474  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:37.514783  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:37.700746  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:37.794370  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:38.008721  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:38.013480  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:38.200215  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:38.300173  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:38.508827  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:38.513462  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:38.700620  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:38.800578  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:38.851395  369702 pod_ready.go:103] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:39.009136  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:39.013633  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:39.200341  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:39.293653  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:39.509231  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:39.513495  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:39.700860  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:39.793070  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:40.011857  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:40.015549  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:40.200668  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:40.298228  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:40.509324  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:40.513128  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:40.701318  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:40.794119  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:41.008743  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:41.014433  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:41.201579  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:41.293044  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:41.351348  369702 pod_ready.go:103] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:41.508586  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:41.513634  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:41.700938  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:41.793843  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:42.008553  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:42.013117  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:42.201996  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:42.301481  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:42.508872  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:42.513624  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:42.700153  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:42.793887  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:43.008659  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:43.014208  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:43.201132  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:43.293774  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:43.508537  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:43.513783  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:43.700740  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:43.794278  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:43.851819  369702 pod_ready.go:103] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:44.008426  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:44.012770  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:44.201272  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:44.292861  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:44.509384  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:44.513316  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:44.702532  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:44.794586  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:45.008790  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:45.013901  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:45.201203  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:45.294065  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:45.510038  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:45.513043  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:45.700868  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:45.793213  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:45.852079  369702 pod_ready.go:103] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"False"
	I0127 12:13:46.008975  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:46.013846  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:46.202083  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:46.301543  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:46.350477  369702 pod_ready.go:93] pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace has status "Ready":"True"
	I0127 12:13:46.350505  369702 pod_ready.go:82] duration metric: took 18.505346083s for pod "amd-gpu-device-plugin-928wb" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.350514  369702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-d6cj7" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.351966  369702 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-d6cj7" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-d6cj7" not found
	I0127 12:13:46.351990  369702 pod_ready.go:82] duration metric: took 1.468902ms for pod "coredns-668d6bf9bc-d6cj7" in "kube-system" namespace to be "Ready" ...
	E0127 12:13:46.351999  369702 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-d6cj7" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-d6cj7" not found
	I0127 12:13:46.352006  369702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-vzjzz" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.356021  369702 pod_ready.go:93] pod "coredns-668d6bf9bc-vzjzz" in "kube-system" namespace has status "Ready":"True"
	I0127 12:13:46.356048  369702 pod_ready.go:82] duration metric: took 4.035479ms for pod "coredns-668d6bf9bc-vzjzz" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.356056  369702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-645690" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.360632  369702 pod_ready.go:93] pod "etcd-addons-645690" in "kube-system" namespace has status "Ready":"True"
	I0127 12:13:46.360646  369702 pod_ready.go:82] duration metric: took 4.585304ms for pod "etcd-addons-645690" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.360653  369702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-645690" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.365090  369702 pod_ready.go:93] pod "kube-apiserver-addons-645690" in "kube-system" namespace has status "Ready":"True"
	I0127 12:13:46.365122  369702 pod_ready.go:82] duration metric: took 4.460526ms for pod "kube-apiserver-addons-645690" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.365133  369702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-645690" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.508888  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:46.512910  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:46.547794  369702 pod_ready.go:93] pod "kube-controller-manager-addons-645690" in "kube-system" namespace has status "Ready":"True"
	I0127 12:13:46.547812  369702 pod_ready.go:82] duration metric: took 182.671391ms for pod "kube-controller-manager-addons-645690" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.547822  369702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6nvj9" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.700759  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:46.794453  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:46.950377  369702 pod_ready.go:93] pod "kube-proxy-6nvj9" in "kube-system" namespace has status "Ready":"True"
	I0127 12:13:46.950407  369702 pod_ready.go:82] duration metric: took 402.577486ms for pod "kube-proxy-6nvj9" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:46.950420  369702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-645690" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:47.009186  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:47.012849  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:47.202700  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:47.294211  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:47.349051  369702 pod_ready.go:93] pod "kube-scheduler-addons-645690" in "kube-system" namespace has status "Ready":"True"
	I0127 12:13:47.349079  369702 pod_ready.go:82] duration metric: took 398.650611ms for pod "kube-scheduler-addons-645690" in "kube-system" namespace to be "Ready" ...
	I0127 12:13:47.349090  369702 pod_ready.go:39] duration metric: took 19.513862321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:13:47.349111  369702 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:13:47.349181  369702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:13:47.366842  369702 api_server.go:72] duration metric: took 22.089886073s to wait for apiserver process to appear ...
	I0127 12:13:47.366868  369702 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:13:47.366895  369702 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0127 12:13:47.371210  369702 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0127 12:13:47.371971  369702 api_server.go:141] control plane version: v1.32.1
	I0127 12:13:47.371991  369702 api_server.go:131] duration metric: took 5.114406ms to wait for apiserver health ...
	I0127 12:13:47.372000  369702 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:13:47.509367  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:47.513462  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:47.553957  369702 system_pods.go:59] 18 kube-system pods found
	I0127 12:13:47.553987  369702 system_pods.go:61] "amd-gpu-device-plugin-928wb" [3aa42640-b26b-490f-9565-5b1bce6bc1cd] Running
	I0127 12:13:47.553992  369702 system_pods.go:61] "coredns-668d6bf9bc-vzjzz" [7aedae7d-b60a-4219-849b-dfc5d8156423] Running
	I0127 12:13:47.553999  369702 system_pods.go:61] "csi-hostpath-attacher-0" [1ad2acdd-86cf-4cc4-b719-619f276775d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 12:13:47.554006  369702 system_pods.go:61] "csi-hostpath-resizer-0" [350abcc5-c641-411d-a412-c48abb8be959] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 12:13:47.554014  369702 system_pods.go:61] "csi-hostpathplugin-drdz9" [79449471-85b0-4af2-8f6c-3e766db14406] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 12:13:47.554018  369702 system_pods.go:61] "etcd-addons-645690" [f0cca8e2-7d85-47f4-86b0-d5b37dc42528] Running
	I0127 12:13:47.554027  369702 system_pods.go:61] "kube-apiserver-addons-645690" [0c522f8f-a298-41a1-be53-abb550578f65] Running
	I0127 12:13:47.554031  369702 system_pods.go:61] "kube-controller-manager-addons-645690" [b5edc007-02b8-404c-91b0-707450c6a0b4] Running
	I0127 12:13:47.554037  369702 system_pods.go:61] "kube-ingress-dns-minikube" [c1bfb48b-f866-484f-9298-7d608be66ab9] Running
	I0127 12:13:47.554040  369702 system_pods.go:61] "kube-proxy-6nvj9" [05ac194f-4401-4acd-95c9-693bb2532d53] Running
	I0127 12:13:47.554044  369702 system_pods.go:61] "kube-scheduler-addons-645690" [ab3c011c-6190-4ae1-8876-4d91223fd30c] Running
	I0127 12:13:47.554049  369702 system_pods.go:61] "metrics-server-7fbb699795-4kg4p" [7ee8de18-dd61-4774-8716-7815448549e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:13:47.554057  369702 system_pods.go:61] "nvidia-device-plugin-daemonset-sd4md" [5beaf7f5-9c24-418a-8c06-61c555ee367f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0127 12:13:47.554061  369702 system_pods.go:61] "registry-6c88467877-vcc5g" [97285ffb-54b7-4f66-b39c-a22b1e7c77d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0127 12:13:47.554067  369702 system_pods.go:61] "registry-proxy-89gbc" [0e35959d-068c-4f26-8edf-27cc6aef30b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 12:13:47.554076  369702 system_pods.go:61] "snapshot-controller-68b874b76f-bgqg6" [af45f91d-d4d6-449e-86d7-874ea36db428] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 12:13:47.554082  369702 system_pods.go:61] "snapshot-controller-68b874b76f-v8ldc" [d72c02f3-13eb-4851-bdcb-02b18bd63dd7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 12:13:47.554114  369702 system_pods.go:61] "storage-provisioner" [57eaae18-7e55-4d96-a5a0-a030dba09ac5] Running
	I0127 12:13:47.554120  369702 system_pods.go:74] duration metric: took 182.114292ms to wait for pod list to return data ...
	I0127 12:13:47.554127  369702 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:13:47.700724  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:47.748326  369702 default_sa.go:45] found service account: "default"
	I0127 12:13:47.748347  369702 default_sa.go:55] duration metric: took 194.214007ms for default service account to be created ...
	I0127 12:13:47.748358  369702 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:13:47.793247  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:47.956479  369702 system_pods.go:87] 18 kube-system pods found
	I0127 12:13:48.008343  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:48.013138  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:48.148722  369702 system_pods.go:105] "amd-gpu-device-plugin-928wb" [3aa42640-b26b-490f-9565-5b1bce6bc1cd] Running
	I0127 12:13:48.148750  369702 system_pods.go:105] "coredns-668d6bf9bc-vzjzz" [7aedae7d-b60a-4219-849b-dfc5d8156423] Running
	I0127 12:13:48.148761  369702 system_pods.go:105] "csi-hostpath-attacher-0" [1ad2acdd-86cf-4cc4-b719-619f276775d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0127 12:13:48.148772  369702 system_pods.go:105] "csi-hostpath-resizer-0" [350abcc5-c641-411d-a412-c48abb8be959] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0127 12:13:48.148781  369702 system_pods.go:105] "csi-hostpathplugin-drdz9" [79449471-85b0-4af2-8f6c-3e766db14406] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0127 12:13:48.148790  369702 system_pods.go:105] "etcd-addons-645690" [f0cca8e2-7d85-47f4-86b0-d5b37dc42528] Running
	I0127 12:13:48.148797  369702 system_pods.go:105] "kube-apiserver-addons-645690" [0c522f8f-a298-41a1-be53-abb550578f65] Running
	I0127 12:13:48.148806  369702 system_pods.go:105] "kube-controller-manager-addons-645690" [b5edc007-02b8-404c-91b0-707450c6a0b4] Running
	I0127 12:13:48.148820  369702 system_pods.go:105] "kube-ingress-dns-minikube" [c1bfb48b-f866-484f-9298-7d608be66ab9] Running
	I0127 12:13:48.148827  369702 system_pods.go:105] "kube-proxy-6nvj9" [05ac194f-4401-4acd-95c9-693bb2532d53] Running
	I0127 12:13:48.148836  369702 system_pods.go:105] "kube-scheduler-addons-645690" [ab3c011c-6190-4ae1-8876-4d91223fd30c] Running
	I0127 12:13:48.148842  369702 system_pods.go:105] "metrics-server-7fbb699795-4kg4p" [7ee8de18-dd61-4774-8716-7815448549e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 12:13:48.148853  369702 system_pods.go:105] "nvidia-device-plugin-daemonset-sd4md" [5beaf7f5-9c24-418a-8c06-61c555ee367f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0127 12:13:48.148863  369702 system_pods.go:105] "registry-6c88467877-vcc5g" [97285ffb-54b7-4f66-b39c-a22b1e7c77d3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0127 12:13:48.148872  369702 system_pods.go:105] "registry-proxy-89gbc" [0e35959d-068c-4f26-8edf-27cc6aef30b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0127 12:13:48.148881  369702 system_pods.go:105] "snapshot-controller-68b874b76f-bgqg6" [af45f91d-d4d6-449e-86d7-874ea36db428] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 12:13:48.148891  369702 system_pods.go:105] "snapshot-controller-68b874b76f-v8ldc" [d72c02f3-13eb-4851-bdcb-02b18bd63dd7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0127 12:13:48.148901  369702 system_pods.go:105] "storage-provisioner" [57eaae18-7e55-4d96-a5a0-a030dba09ac5] Running
	I0127 12:13:48.148916  369702 system_pods.go:147] duration metric: took 400.550047ms to wait for k8s-apps to be running ...
	I0127 12:13:48.148929  369702 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:13:48.148984  369702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:13:48.164501  369702 system_svc.go:56] duration metric: took 15.560136ms WaitForService to wait for kubelet
	I0127 12:13:48.164537  369702 kubeadm.go:582] duration metric: took 22.887587005s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:13:48.164567  369702 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:13:48.200370  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:48.293328  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:48.348884  369702 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:13:48.348929  369702 node_conditions.go:123] node cpu capacity is 2
	I0127 12:13:48.348945  369702 node_conditions.go:105] duration metric: took 184.367245ms to run NodePressure ...
	I0127 12:13:48.348959  369702 start.go:241] waiting for startup goroutines ...
	I0127 12:13:48.509179  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:48.513172  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:48.700862  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:48.800308  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:49.009814  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:49.015052  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:49.200989  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:49.293585  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:49.509497  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:49.513266  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:49.701364  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:49.793666  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:50.009668  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:50.015285  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:50.201484  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:50.301440  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:50.508778  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:50.513921  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:50.700509  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:50.794350  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:51.009206  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:51.013215  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:51.200847  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:51.293568  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:51.509642  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:51.513758  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:51.700946  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:51.792846  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:52.008910  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:52.012701  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:52.200763  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:52.292566  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:52.509398  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:52.513201  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:52.701057  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:52.793306  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:53.008768  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:53.013465  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:53.200505  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:53.294586  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:53.509372  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:53.513505  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:53.700736  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:53.794060  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:54.008558  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:54.013888  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:54.200541  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:54.297017  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:54.517806  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:54.518019  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:54.701325  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:54.794318  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:55.008717  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:55.013442  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:55.200056  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:55.292660  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:55.509325  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:55.513326  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:55.706677  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:55.794668  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:56.009132  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:56.013403  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:56.200988  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:56.293860  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:56.509560  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:56.513894  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:56.702271  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:56.793480  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:57.009880  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:57.014107  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:57.200270  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:57.293449  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:57.509313  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:57.512842  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:57.700337  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:57.793887  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:58.009130  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:58.013630  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:58.200440  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:58.294123  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:58.509290  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:58.513149  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:58.746927  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:58.881513  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:59.324478  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:59.324796  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:59.326181  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:13:59.326734  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:59.509104  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:13:59.513406  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:13:59.701356  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:13:59.801824  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:00.008878  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:00.012529  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:00.201335  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:00.293594  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:00.509841  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:00.513754  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:00.700863  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:00.794055  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:01.008619  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:01.013515  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:01.201189  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:01.293302  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:01.510563  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:01.513190  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:01.701562  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:01.794078  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:02.008729  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:02.013826  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:02.200676  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:02.293381  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:02.509151  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:02.513415  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:02.701209  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:02.793806  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:03.009481  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:03.013963  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:03.200915  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:03.293177  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:03.509654  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:03.513147  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:03.702550  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:03.793940  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:04.011140  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:04.014443  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:04.200921  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:04.294845  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:04.510095  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:04.513205  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:04.701611  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:04.794393  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:05.009752  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:05.015181  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:05.201240  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:05.293713  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:05.509824  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:05.513596  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:05.700336  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:05.794564  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:06.010446  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:06.015649  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:06.201512  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:06.293924  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:06.509142  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:06.513263  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:06.700930  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:06.793414  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:07.018375  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:07.018931  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:07.200689  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:07.293739  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:07.509507  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:07.513586  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:07.701260  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:07.793005  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:08.009959  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:08.013872  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:08.200497  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:08.294119  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:08.508259  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:08.512854  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:08.700470  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:08.793753  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:09.330609  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:09.330634  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:09.330755  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:09.332302  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:09.510189  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:09.512742  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:09.700749  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:09.794214  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:10.008516  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:10.013326  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:10.200489  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:10.294456  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:10.509106  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:10.513389  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:10.700702  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:10.793761  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:11.009199  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:11.013373  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:11.200682  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:11.293918  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:11.508542  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:11.513259  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:11.703801  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:11.794070  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:12.008215  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:12.012949  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:12.200748  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:12.294507  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:12.509571  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:12.513643  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:12.700951  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:12.793269  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:13.009102  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:13.013105  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:13.201358  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:13.293925  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:13.508915  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:13.512785  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:13.701583  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:13.793925  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:14.008861  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:14.013403  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:14.200198  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:14.293267  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:14.510173  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:14.513177  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:14.702021  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:14.793281  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:15.009366  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:15.013544  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:15.201457  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:15.294368  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:15.509793  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:15.514004  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:15.701169  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:15.793570  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:16.009540  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:16.013399  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:16.201549  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:16.293932  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:16.508332  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:16.513344  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:16.700524  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:16.793456  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:17.009174  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:17.012669  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:17.201100  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:17.293468  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:17.510194  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:17.513119  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:17.701131  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:17.793380  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:18.010027  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:18.012972  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:18.201263  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:18.293720  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:18.510239  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:18.512728  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:18.700712  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:18.795775  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:19.009533  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:19.013286  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:19.201535  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:19.293115  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:19.508737  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:19.514218  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:19.701396  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:19.793924  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:20.008191  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:20.012765  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:20.200891  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:20.293147  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:20.509094  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:20.513435  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:20.701122  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:20.793708  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:21.010114  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:21.013806  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:21.203046  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:21.293960  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:21.509404  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:21.513217  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:21.701055  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:21.809711  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:22.264810  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:22.265102  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:22.265165  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:22.293587  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:22.509801  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:22.514348  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0127 12:14:22.702910  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:22.802566  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:23.010054  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:23.013548  369702 kapi.go:107] duration metric: took 49.003414421s to wait for kubernetes.io/minikube-addons=registry ...
	I0127 12:14:23.200986  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:23.292969  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:23.509245  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:23.700850  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:23.792971  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:24.011940  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:24.200384  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:24.293259  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:24.509084  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:24.701441  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:24.802964  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:25.011141  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:25.200560  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:25.293438  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:25.509622  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:25.700338  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:25.793786  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:26.019198  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:26.201578  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:26.293410  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:26.509502  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:26.701286  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:26.793558  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:27.009416  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:27.200774  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:27.294127  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:27.508964  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:27.700996  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:27.793878  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:28.011004  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:28.200664  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:28.293913  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:28.509617  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:28.700622  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:28.794787  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:29.010235  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:29.201686  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:29.294719  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:29.510245  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:29.700835  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:29.794042  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:30.008866  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:30.200241  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:30.293802  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:30.509099  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:30.700735  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:30.794231  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:31.008551  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:31.200894  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:31.293614  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:31.510140  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:31.700648  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:31.793371  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:32.009483  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:32.200527  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:32.293954  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:32.508116  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:32.700902  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:32.793818  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:33.010151  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:33.200791  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:33.293774  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:33.509389  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:33.702950  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:33.793904  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:34.010296  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:34.200906  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:34.294670  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:34.508775  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:34.700012  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:34.792760  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:35.010917  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:35.201319  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:35.292873  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:35.930062  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:35.930878  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:35.931369  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:36.008934  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:36.201355  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:36.300595  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:36.509171  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:36.702095  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:36.793236  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:37.008882  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:37.201232  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:37.293295  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:37.508858  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:37.701316  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:37.793025  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:38.009502  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:38.201290  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:38.293359  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:38.509523  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:38.701966  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:38.793909  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:39.010078  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:39.200268  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:39.293324  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:39.509773  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:39.701451  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:39.795497  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:40.010623  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:40.200575  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:40.293522  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:40.508942  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:40.700589  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:40.793546  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:41.009546  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:41.201086  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:41.294105  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:41.508797  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:41.700739  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:41.794061  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:42.011803  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:42.200314  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:42.293318  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:42.509104  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:42.700719  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:42.794361  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:43.008943  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:43.200863  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:43.293660  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:43.509609  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:43.700985  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:43.799853  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:44.027870  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:44.200209  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:44.293283  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:44.508689  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:44.700311  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:44.793981  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:45.008960  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:45.201620  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:45.294004  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:45.509068  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:45.700878  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:45.793026  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:46.009027  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:46.200551  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:46.293675  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:46.509626  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:46.701457  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:46.794012  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:47.008904  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:47.201391  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:47.293595  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:47.509251  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:47.701706  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:47.794804  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:48.010358  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:48.203306  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:48.299765  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:48.509974  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:48.713181  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:48.793310  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:49.010035  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:49.206746  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:49.308155  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:49.509642  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:49.707423  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:49.803843  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:50.010707  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:50.214580  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:50.309653  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:50.508717  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:50.701667  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:50.794103  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:51.010361  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:51.201269  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:51.292971  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:51.508435  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:51.700570  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:51.804348  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:52.009406  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:52.206868  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:52.294141  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:52.509596  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:52.702027  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:52.793681  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:53.010449  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:53.203653  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:53.306622  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:53.509150  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:53.701191  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:53.793967  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:54.009321  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:54.201944  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:54.301607  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:54.509738  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:54.701089  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:54.793300  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:55.010253  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:55.200597  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:55.293962  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:55.508425  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:55.700648  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:55.794841  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:56.008386  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:56.638807  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:56.640160  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:56.642309  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:56.739401  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:56.838780  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:57.014409  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:57.201238  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:57.294780  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:57.509339  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:57.701278  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:57.793357  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:58.009474  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:58.202044  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:58.302896  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:58.511285  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:58.701618  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:58.794368  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:59.009743  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:59.200376  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:59.296114  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:14:59.508917  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:14:59.710368  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:14:59.807504  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:00.011636  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:00.201004  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:00.293254  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:00.509663  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:00.700350  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:00.799439  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:01.010254  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:01.200131  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:01.293329  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:01.509693  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:01.700321  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:01.793536  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:02.011989  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:02.201604  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:02.293665  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:02.510483  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:02.701131  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:02.801068  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:03.008306  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:03.201972  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:03.293544  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:03.509487  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:03.701023  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:03.793386  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:04.010769  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:04.201753  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0127 12:15:04.300358  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:04.508985  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:04.700483  369702 kapi.go:107] duration metric: took 1m30.004492471s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0127 12:15:04.793603  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:05.010137  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:05.294484  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:05.509319  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:05.795041  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:06.009436  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:06.293895  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:06.508646  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:06.793733  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:07.009602  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:07.292963  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:07.508694  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:07.793515  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:08.009279  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:08.293887  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:08.508440  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:08.793611  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:09.009482  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:09.294245  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:09.509516  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:09.793743  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:10.009986  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:10.293555  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:10.509953  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:10.793738  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:11.010138  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:11.293981  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:11.508646  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:11.793472  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:12.009658  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:12.293115  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:12.508956  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:12.793950  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:13.009381  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:13.294358  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:13.512064  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:13.794080  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:14.009607  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:14.293036  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:14.509099  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:14.794448  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:15.011264  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:15.294636  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:15.510168  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:15.793833  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:16.008887  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:16.293996  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:16.508638  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:16.793367  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:17.009756  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:17.293794  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:17.509342  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:17.794106  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:18.009831  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:18.293437  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:18.509523  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:18.793528  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:19.010209  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:19.294001  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:19.508753  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:19.793653  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:20.010226  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:20.293018  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:20.508810  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:20.793316  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:21.009530  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:21.293516  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:21.509466  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:21.793275  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:22.009065  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:22.293631  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:22.509729  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:22.795910  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:23.009279  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:23.294176  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:23.509941  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:23.793876  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:24.010775  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:24.293283  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:24.510849  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:24.793256  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:25.012191  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:25.293660  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:25.509513  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:25.794676  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:26.010069  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:26.293985  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:26.508974  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:26.793526  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:27.009964  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:27.293954  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:27.510572  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:27.793056  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:28.009568  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:28.294309  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:28.509284  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:28.795124  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:29.010001  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:29.293842  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:29.508897  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:29.794130  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:30.010125  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:30.293796  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:30.509503  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:30.794980  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:31.010870  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:31.294499  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:31.509238  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:31.794254  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:32.010516  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:32.293158  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:32.509147  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:32.793696  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:33.014740  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:33.294753  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:33.510391  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:33.794343  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:34.010266  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:34.293858  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:34.509727  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:34.795814  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:35.011515  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:35.293920  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:35.508932  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:35.793987  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:36.009849  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:36.293990  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:36.509352  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:36.793993  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:37.010023  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:37.293963  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:37.510066  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:37.793561  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:38.009867  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:38.293597  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:38.509375  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:38.793006  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:39.012205  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:39.294313  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:39.509189  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:39.794043  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:40.012455  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:40.294048  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:40.509569  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:40.793728  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:41.011506  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:41.293943  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:41.509985  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:41.796119  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:42.010745  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:42.293477  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:42.509763  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:42.793384  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:43.010916  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:43.293935  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:43.509917  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:43.793353  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:44.010739  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:44.295076  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:44.509805  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:44.793215  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:45.010105  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:45.293672  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:45.509331  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:45.794658  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:46.010707  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:46.294146  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:46.508952  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:46.793931  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:47.011241  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:47.294045  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:47.508927  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:47.794008  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:48.009454  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:48.292933  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:48.508709  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:48.793776  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:49.011020  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:49.293938  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:49.509020  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:49.793405  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:50.009603  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:50.293227  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:50.509046  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:50.795106  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:51.012769  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:51.294012  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:51.508884  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:51.793233  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:52.010361  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:52.294525  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:52.510000  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:52.795246  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:53.014315  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:53.295222  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:53.510033  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:53.793294  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:54.009586  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:54.293211  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:54.510174  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:54.793796  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:55.023603  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:55.293546  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:55.509535  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:55.794143  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:56.012934  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:56.294451  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:56.509215  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:56.794147  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:57.010454  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:57.293287  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:57.509001  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:57.793692  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:58.010016  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:58.300169  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:58.508809  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:58.793743  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:59.012186  369702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0127 12:15:59.294662  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:15:59.514020  369702 kapi.go:107] duration metric: took 2m25.509313405s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0127 12:15:59.793796  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:00.293643  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:00.805441  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:01.293683  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:01.794094  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:02.293913  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:02.793659  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:03.293708  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:03.793190  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:04.293962  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:04.794848  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:05.294943  369702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0127 12:16:05.793135  369702 kapi.go:107] duration metric: took 2m30.003187047s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0127 12:16:05.794899  369702 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-645690 cluster.
	I0127 12:16:05.796167  369702 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0127 12:16:05.797344  369702 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0127 12:16:05.798570  369702 out.go:177] * Enabled addons: storage-provisioner, storage-provisioner-rancher, ingress-dns, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0127 12:16:05.799638  369702 addons.go:514] duration metric: took 2m40.522627473s for enable addons: enabled=[storage-provisioner storage-provisioner-rancher ingress-dns cloud-spanner nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0127 12:16:05.799675  369702 start.go:246] waiting for cluster config update ...
	I0127 12:16:05.799696  369702 start.go:255] writing updated cluster config ...
	I0127 12:16:05.799952  369702 ssh_runner.go:195] Run: rm -f paused
	I0127 12:16:05.852319  369702 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:16:05.853949  369702 out.go:177] * Done! kubectl is now configured to use "addons-645690" cluster and "default" namespace by default
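	For reference on the gcp-auth messages above: opting a single pod out of the credential mount is done by labelling it with the `gcp-auth-skip-secret` key. The sketch below is a minimal illustration, not output from this run; the pod and container names are placeholders, and the label value "true" is an assumption, since the log only names the key.

	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds        # placeholder name, not from this test
	  labels:
	    gcp-auth-skip-secret: "true"    # key taken from the gcp-auth message above; value assumed
	spec:
	  containers:
	    - name: app                     # placeholder container
	      image: busybox:1.36
	      command: ["sleep", "3600"]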
	
	
	==> CRI-O <==
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.608842002Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cc55d57-d6cc-4ae1-8230-e0f01a3b8d8e name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.610612620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbb1fd22-a9f8-419b-802a-fe205d8d0542 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.612100922Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980376612069117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbb1fd22-a9f8-419b-802a-fe205d8d0542 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.613069626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7474acca-74fb-4342-b6f1-d4e08cec7333 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.613144339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7474acca-74fb-4342-b6f1-d4e08cec7333 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.613716620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0f9acdc3fe2f25f0714510ee80fa715f2aff652192d8b5b7a7a87239be0afb0,PodSandboxId:d30e6e7a37501041408c1038d9765966317a6a73aa6f02f6fd8278b177ba60f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737980236571515455,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 36b1d7e0-3d7b-43ab-9b8d-0887cb4e17d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:445ee2737fb6741808fd8e457abf9a64ec4d755efc8ef9b75641b2033820cdce,PodSandboxId:4d08254f42182a46504a76dc37ce199d77cc396afb6b2ec00322418b0d272792,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737980174177884080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30dbd9e2-0420-4460-acb3-10f7edb4018c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b207e3a2043459fb347ab1d43389216ad3cce285d4b754621269235ad63e587e,PodSandboxId:e06725803bf6545ba5cf1d28f5c38ea8a713fa04f7adfa9fea05b0d9065c6e4b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737980159077463446,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nrz2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a0f2b1c-71b1-407d-af97-5a0bc851aa40,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7fef05a36a411c03206eb0de70e0ee9d14d5e10f993837e2ebf33d57a42bc488,PodSandboxId:e98016ff11abe4e619bd8f353ee7fa7200c27ec76a43b862958c7d16646be916,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737980096891039963,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k49p8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c0398a54-b1ea-47e5-9b24-e5c4c770e60d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c771e2af729878911f2639f3f8069b95ce43f8da9ff96c3c8019e516e66427e,PodSandboxId:f6772e7d7a215e06646c7ee1a570b3735e48886d15fc641caa65cba5b88bd1cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737980096679823336,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cc4sg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85432c89-a324-4773-be89-2bbfec0aa357,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9fab12835504a1dbe5f8effa8b10254d29c1bd758e347e6942448342b1251e2,PodSandboxId:4fff5b95dda8e074802f276ad81fdb70ff17ab8ef4d67fd6d4543a8e467843fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737980025869740344,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-928wb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aa42640-b26b-490f-9565-5b1bce6bc1cd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:271e72d2fd7a3a3f6f9b672328cdbac4b50ca761912db3f71139864e340607d7,PodSandboxId:3b73acc52613dcc4fd16237ea5e6d138e677ef857fcb69536299be33f439dc60,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737980022154813931,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bfb48b-f866-484f-9298-7d608be66ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4100f775a2579f72d54b5b4a4b52f3ddc7f27c5e211eed8d07a0448b040ea13,PodSandboxId:b9fc04dcf918719fa883e62d59c323a780169dfe3471dea2a870dd6a839330b7,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737980011381752835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eaae18-7e55-4d96-a5a0-a030dba09ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16aeec06f28066d085a1720c08148e72c022d85d53a92620e2f94e53ad97c4b3,PodSandboxId:8f40df14522650542af4367198e1870479e81f6d9fb7c4bf8e275ca595dbd290,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980008773525474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-vzjzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aedae7d-b60a-4219-849b-dfc5d8156423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:8129f7c08ae63558773e045deb07dc150e710891d2b769836ec2e4c771d260ec,PodSandboxId:88d1afd66dcd87f8865829eeb83ee45fe097eec5be78ec28e5bb1bf6a1671e68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980005975514879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6nvj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ac194f-4401-4acd-95c9-693bb2532d53,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f15a1684bd57bb0a6f28279c12a
3ba17363c1b68c8d59adb43588888d85b333,PodSandboxId:5a32099963799987ec81ced408e99c545763b3c7b442f5db67d5bb605c123742,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737979995234226340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d688153239b508eff16b351467fcaa68,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6eadc87ddfa50e53facda00cd14059c10c69c076bc1
0a71f8c509295e8d4d8,PodSandboxId:1b056b46df5028384c54ce48e97c163c944ebeafc66e978652cdb540f17616dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737979995212435349,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6050c2fb3a2d208fac904f6f4b9487f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e9bcc9f6e87a445891c9ecabde87b0967826e67629c398609edb76cbc033
75,PodSandboxId:333472b5fc65f44176bac60799df7955a6f2ff9b40b98fe3b4a5ebf303285403,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737979995156830202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85595ee66b6074d8ae9ec2bbc5b6e030,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a022b8921d621bbd8cff7beb85efbaaee7f8ed013eee727a88da3c1e78395a,PodSandboxId:0b8b98dedd510dc39504a922137b09b
dffc77394de9c9b1691ff6bc309e68848,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737979995099399482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53715843a73f6429ecfd00b619b8da15,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7474acca-74fb-4342-b6f1-d4e08cec7333 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.648749352Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8050ef14-1d5d-4de0-bbe2-285f0795426a name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.648813472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8050ef14-1d5d-4de0-bbe2-285f0795426a name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.649656548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f06db4c6-28c7-4832-b7d2-20184016f2e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.650904335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980376650876644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f06db4c6-28c7-4832-b7d2-20184016f2e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.652467218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a475df3-5921-4418-bb8a-990a8c64bf7c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.652524634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a475df3-5921-4418-bb8a-990a8c64bf7c name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.652799823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0f9acdc3fe2f25f0714510ee80fa715f2aff652192d8b5b7a7a87239be0afb0,PodSandboxId:d30e6e7a37501041408c1038d9765966317a6a73aa6f02f6fd8278b177ba60f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737980236571515455,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 36b1d7e0-3d7b-43ab-9b8d-0887cb4e17d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:445ee2737fb6741808fd8e457abf9a64ec4d755efc8ef9b75641b2033820cdce,PodSandboxId:4d08254f42182a46504a76dc37ce199d77cc396afb6b2ec00322418b0d272792,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737980174177884080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30dbd9e2-0420-4460-acb3-10f7edb4018c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b207e3a2043459fb347ab1d43389216ad3cce285d4b754621269235ad63e587e,PodSandboxId:e06725803bf6545ba5cf1d28f5c38ea8a713fa04f7adfa9fea05b0d9065c6e4b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737980159077463446,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nrz2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a0f2b1c-71b1-407d-af97-5a0bc851aa40,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7fef05a36a411c03206eb0de70e0ee9d14d5e10f993837e2ebf33d57a42bc488,PodSandboxId:e98016ff11abe4e619bd8f353ee7fa7200c27ec76a43b862958c7d16646be916,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737980096891039963,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k49p8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c0398a54-b1ea-47e5-9b24-e5c4c770e60d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c771e2af729878911f2639f3f8069b95ce43f8da9ff96c3c8019e516e66427e,PodSandboxId:f6772e7d7a215e06646c7ee1a570b3735e48886d15fc641caa65cba5b88bd1cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737980096679823336,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cc4sg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85432c89-a324-4773-be89-2bbfec0aa357,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9fab12835504a1dbe5f8effa8b10254d29c1bd758e347e6942448342b1251e2,PodSandboxId:4fff5b95dda8e074802f276ad81fdb70ff17ab8ef4d67fd6d4543a8e467843fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737980025869740344,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-928wb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aa42640-b26b-490f-9565-5b1bce6bc1cd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:271e72d2fd7a3a3f6f9b672328cdbac4b50ca761912db3f71139864e340607d7,PodSandboxId:3b73acc52613dcc4fd16237ea5e6d138e677ef857fcb69536299be33f439dc60,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737980022154813931,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bfb48b-f866-484f-9298-7d608be66ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4100f775a2579f72d54b5b4a4b52f3ddc7f27c5e211eed8d07a0448b040ea13,PodSandboxId:b9fc04dcf918719fa883e62d59c323a780169dfe3471dea2a870dd6a839330b7,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737980011381752835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eaae18-7e55-4d96-a5a0-a030dba09ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16aeec06f28066d085a1720c08148e72c022d85d53a92620e2f94e53ad97c4b3,PodSandboxId:8f40df14522650542af4367198e1870479e81f6d9fb7c4bf8e275ca595dbd290,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980008773525474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-vzjzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aedae7d-b60a-4219-849b-dfc5d8156423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:8129f7c08ae63558773e045deb07dc150e710891d2b769836ec2e4c771d260ec,PodSandboxId:88d1afd66dcd87f8865829eeb83ee45fe097eec5be78ec28e5bb1bf6a1671e68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980005975514879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6nvj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ac194f-4401-4acd-95c9-693bb2532d53,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f15a1684bd57bb0a6f28279c12a
3ba17363c1b68c8d59adb43588888d85b333,PodSandboxId:5a32099963799987ec81ced408e99c545763b3c7b442f5db67d5bb605c123742,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737979995234226340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d688153239b508eff16b351467fcaa68,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6eadc87ddfa50e53facda00cd14059c10c69c076bc1
0a71f8c509295e8d4d8,PodSandboxId:1b056b46df5028384c54ce48e97c163c944ebeafc66e978652cdb540f17616dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737979995212435349,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6050c2fb3a2d208fac904f6f4b9487f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e9bcc9f6e87a445891c9ecabde87b0967826e67629c398609edb76cbc033
75,PodSandboxId:333472b5fc65f44176bac60799df7955a6f2ff9b40b98fe3b4a5ebf303285403,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737979995156830202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85595ee66b6074d8ae9ec2bbc5b6e030,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a022b8921d621bbd8cff7beb85efbaaee7f8ed013eee727a88da3c1e78395a,PodSandboxId:0b8b98dedd510dc39504a922137b09b
dffc77394de9c9b1691ff6bc309e68848,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737979995099399482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53715843a73f6429ecfd00b619b8da15,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a475df3-5921-4418-bb8a-990a8c64bf7c name=/runtime.v1.RuntimeServ
ice/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.684830043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25f80d9b-1d15-4ddc-bc73-3ec35c18b77d name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.684897203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25f80d9b-1d15-4ddc-bc73-3ec35c18b77d name=/runtime.v1.RuntimeService/Version
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.687727739Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61372c0f-9b66-4b9f-b9b4-59d105a69fdb name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.688070404Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c2b4ad33df0300dc5cdaf5fdc094b8f28a756f111487ad7910e937455f1d4ad1,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-hgq2b,Uid:f756b4d6-e4c4-4856-8c6f-1be7d8fe8ecc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737980375605916263,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-hgq2b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f756b4d6-e4c4-4856-8c6f-1be7d8fe8ecc,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T12:19:35.297476744Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d30e6e7a37501041408c1038d9765966317a6a73aa6f02f6fd8278b177ba60f3,Metadata:&PodSandboxMetadata{Name:nginx,Uid:36b1d7e0-3d7b-43ab-9b8d-0887cb4e17d4,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1737980230839202785,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 36b1d7e0-3d7b-43ab-9b8d-0887cb4e17d4,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T12:17:10.526732775Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d08254f42182a46504a76dc37ce199d77cc396afb6b2ec00322418b0d272792,Metadata:&PodSandboxMetadata{Name:busybox,Uid:30dbd9e2-0420-4460-acb3-10f7edb4018c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737980168255125738,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30dbd9e2-0420-4460-acb3-10f7edb4018c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T12:16:07.941686516Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e06725803bf6545ba5cf1
d28f5c38ea8a713fa04f7adfa9fea05b0d9065c6e4b,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-56d7c84fd4-nrz2q,Uid:3a0f2b1c-71b1-407d-af97-5a0bc851aa40,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737980151694761705,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nrz2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a0f2b1c-71b1-407d-af97-5a0bc851aa40,pod-template-hash: 56d7c84fd4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T12:13:33.819951471Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9fc04dcf918719fa883e62d59c323a780169dfe3471dea2a870dd6a839330b7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:57eaae18-7e55-4d96-a5a0-a030dba09ac5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,
CreatedAt:1737980010453029991,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eaae18-7e55-4d96-a5a0-a030dba09ac5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"D
irectory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-27T12:13:30.138866206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3b73acc52613dcc4fd16237ea5e6d138e677ef857fcb69536299be33f439dc60,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:c1bfb48b-f866-484f-9298-7d608be66ab9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737980010253415595,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bfb48b-f866-484f-9298-7d608be66ab9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"container
s\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2025-01-27T12:13:29.637648961Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4fff5b95dda8e074802f276ad81fdb70ff17ab8ef4d67fd6d4543a8e467843fa,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-928wb,Uid:3aa42640-b26b-490f-9565-5b1bce6bc1cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737980007848960218,Labels:map[string]string{controller-revision-hash: 578b4c597,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-928wb,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 3aa42640-b26b-490f-9565-5b1bce6bc1cd,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T12:13:27.532183984Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f40df14522650542af4367198e1870479e81f6d9fb7c4bf8e275ca595dbd290,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-vzjzz,Uid:7aedae7d-b60a-4219-849b-dfc5d8156423,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737980005798838524,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-vzjzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aedae7d-b60a-4219-849b-dfc5d8156423,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T12:13:25.456388095Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88d1afd66dcd87f8865829eeb83ee45fe097ee
c5be78ec28e5bb1bf6a1671e68,Metadata:&PodSandboxMetadata{Name:kube-proxy-6nvj9,Uid:05ac194f-4401-4acd-95c9-693bb2532d53,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737980005559473373,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6nvj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ac194f-4401-4acd-95c9-693bb2532d53,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-27T12:13:25.144856279Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:333472b5fc65f44176bac60799df7955a6f2ff9b40b98fe3b4a5ebf303285403,Metadata:&PodSandboxMetadata{Name:etcd-addons-645690,Uid:85595ee66b6074d8ae9ec2bbc5b6e030,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737979994976823785,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-645690,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 85595ee66b6074d8ae9ec2bbc5b6e030,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.68:2379,kubernetes.io/config.hash: 85595ee66b6074d8ae9ec2bbc5b6e030,kubernetes.io/config.seen: 2025-01-27T12:13:14.490029787Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1b056b46df5028384c54ce48e97c163c944ebeafc66e978652cdb540f17616dd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-645690,Uid:a6050c2fb3a2d208fac904f6f4b9487f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737979994973888935,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6050c2fb3a2d208fac904f6f4b9487f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.68:8443,kubernetes.i
o/config.hash: a6050c2fb3a2d208fac904f6f4b9487f,kubernetes.io/config.seen: 2025-01-27T12:13:14.490031249Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b8b98dedd510dc39504a922137b09bdffc77394de9c9b1691ff6bc309e68848,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-645690,Uid:53715843a73f6429ecfd00b619b8da15,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737979994955991731,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53715843a73f6429ecfd00b619b8da15,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 53715843a73f6429ecfd00b619b8da15,kubernetes.io/config.seen: 2025-01-27T12:13:14.490033786Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a32099963799987ec81ced408e99c545763b3c7b442f5db67d5bb605c123742,Metadata:&PodSandboxMetadata{Name:
kube-scheduler-addons-645690,Uid:d688153239b508eff16b351467fcaa68,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737979994955943358,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d688153239b508eff16b351467fcaa68,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d688153239b508eff16b351467fcaa68,kubernetes.io/config.seen: 2025-01-27T12:13:14.490025897Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=61372c0f-9b66-4b9f-b9b4-59d105a69fdb name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.689139336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc41dd0e-4522-4ac7-a3c0-09718da9523c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.689743806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23c64718-30b9-47e1-b856-fc3c4fb99df3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.689793904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23c64718-30b9-47e1-b856-fc3c4fb99df3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.690060693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0f9acdc3fe2f25f0714510ee80fa715f2aff652192d8b5b7a7a87239be0afb0,PodSandboxId:d30e6e7a37501041408c1038d9765966317a6a73aa6f02f6fd8278b177ba60f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737980236571515455,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 36b1d7e0-3d7b-43ab-9b8d-0887cb4e17d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:445ee2737fb6741808fd8e457abf9a64ec4d755efc8ef9b75641b2033820cdce,PodSandboxId:4d08254f42182a46504a76dc37ce199d77cc396afb6b2ec00322418b0d272792,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737980174177884080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30dbd9e2-0420-4460-acb3-10f7edb4018c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b207e3a2043459fb347ab1d43389216ad3cce285d4b754621269235ad63e587e,PodSandboxId:e06725803bf6545ba5cf1d28f5c38ea8a713fa04f7adfa9fea05b0d9065c6e4b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737980159077463446,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nrz2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a0f2b1c-71b1-407d-af97-5a0bc851aa40,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a9fab12835504a1dbe5f8effa8b10254d29c1bd758e347e6942448342b1251e2,PodSandboxId:4fff5b95dda8e074802f276ad81fdb70ff17ab8ef4d67fd6d4543a8e467843fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2b
b6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737980025869740344,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-928wb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aa42640-b26b-490f-9565-5b1bce6bc1cd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:271e72d2fd7a3a3f6f9b672328cdbac4b50ca761912db3f71139864e340607d7,PodSandboxId:3b73acc52613dcc4fd16237ea5e6d138e677ef857fcb69536299be33f439dc60,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737980022154813931,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bfb48b-f866-484f-9298-7d608be66ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4100f775a2579f72d54b5b4a4b52f3ddc7f27c5e211eed8d07a0448b040ea13,PodSandboxId:b9fc04dcf918719fa883e62d59c323a780169dfe3471dea2a870dd6a839330b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f56
17342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737980011381752835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eaae18-7e55-4d96-a5a0-a030dba09ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16aeec06f28066d085a1720c08148e72c022d85d53a92620e2f94e53ad97c4b3,PodSandboxId:8f40df14522650542af4367198e1870479e81f6d9fb7c4bf8e275ca595dbd290,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f9159
6bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980008773525474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-vzjzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aedae7d-b60a-4219-849b-dfc5d8156423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8129f7c08ae63558773e045deb07dc150e710891d2b769836ec2e4c771d260ec,Pod
SandboxId:88d1afd66dcd87f8865829eeb83ee45fe097eec5be78ec28e5bb1bf6a1671e68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980005975514879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6nvj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ac194f-4401-4acd-95c9-693bb2532d53,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f15a1684bd57bb0a6f28279c12a3ba17363c1b68c8d59adb43588888d85b333,PodSandboxId:5a32099963799987ec81ced
408e99c545763b3c7b442f5db67d5bb605c123742,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737979995234226340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d688153239b508eff16b351467fcaa68,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6eadc87ddfa50e53facda00cd14059c10c69c076bc10a71f8c509295e8d4d8,PodSandboxId:1b056b46df5028384c54ce48e97c163c944ebeaf
c66e978652cdb540f17616dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737979995212435349,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6050c2fb3a2d208fac904f6f4b9487f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e9bcc9f6e87a445891c9ecabde87b0967826e67629c398609edb76cbc03375,PodSandboxId:333472b5fc65f44176bac60799df7955a6f2ff9b40b98fe3b4a5ebf30
3285403,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737979995156830202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85595ee66b6074d8ae9ec2bbc5b6e030,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a022b8921d621bbd8cff7beb85efbaaee7f8ed013eee727a88da3c1e78395a,PodSandboxId:0b8b98dedd510dc39504a922137b09bdffc77394de9c9b1691ff6bc309e68848,Metadata:&ContainerMetadata{Name:kube-c
ontroller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737979995099399482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53715843a73f6429ecfd00b619b8da15,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23c64718-30b9-47e1-b856-fc3c4fb99df3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.692054631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980376692033351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc41dd0e-4522-4ac7-a3c0-09718da9523c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.692925876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f289865a-3c24-4cd7-a163-a3be04b5e4d0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.693057564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f289865a-3c24-4cd7-a163-a3be04b5e4d0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 12:19:36 addons-645690 crio[660]: time="2025-01-27 12:19:36.693437582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0f9acdc3fe2f25f0714510ee80fa715f2aff652192d8b5b7a7a87239be0afb0,PodSandboxId:d30e6e7a37501041408c1038d9765966317a6a73aa6f02f6fd8278b177ba60f3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737980236571515455,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 36b1d7e0-3d7b-43ab-9b8d-0887cb4e17d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:445ee2737fb6741808fd8e457abf9a64ec4d755efc8ef9b75641b2033820cdce,PodSandboxId:4d08254f42182a46504a76dc37ce199d77cc396afb6b2ec00322418b0d272792,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737980174177884080,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30dbd9e2-0420-4460-acb3-10f7edb4018c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b207e3a2043459fb347ab1d43389216ad3cce285d4b754621269235ad63e587e,PodSandboxId:e06725803bf6545ba5cf1d28f5c38ea8a713fa04f7adfa9fea05b0d9065c6e4b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737980159077463446,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-nrz2q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3a0f2b1c-71b1-407d-af97-5a0bc851aa40,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7fef05a36a411c03206eb0de70e0ee9d14d5e10f993837e2ebf33d57a42bc488,PodSandboxId:e98016ff11abe4e619bd8f353ee7fa7200c27ec76a43b862958c7d16646be916,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737980096891039963,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k49p8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c0398a54-b1ea-47e5-9b24-e5c4c770e60d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c771e2af729878911f2639f3f8069b95ce43f8da9ff96c3c8019e516e66427e,PodSandboxId:f6772e7d7a215e06646c7ee1a570b3735e48886d15fc641caa65cba5b88bd1cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737980096679823336,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-cc4sg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85432c89-a324-4773-be89-2bbfec0aa357,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9fab12835504a1dbe5f8effa8b10254d29c1bd758e347e6942448342b1251e2,PodSandboxId:4fff5b95dda8e074802f276ad81fdb70ff17ab8ef4d67fd6d4543a8e467843fa,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737980025869740344,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-928wb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aa42640-b26b-490f-9565-5b1bce6bc1cd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:271e72d2fd7a3a3f6f9b672328cdbac4b50ca761912db3f71139864e340607d7,PodSandboxId:3b73acc52613dcc4fd16237ea5e6d138e677ef857fcb69536299be33f439dc60,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737980022154813931,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1bfb48b-f866-484f-9298-7d608be66ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4100f775a2579f72d54b5b4a4b52f3ddc7f27c5e211eed8d07a0448b040ea13,PodSandboxId:b9fc04dcf918719fa883e62d59c323a780169dfe3471dea2a870dd6a839330b7,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737980011381752835,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eaae18-7e55-4d96-a5a0-a030dba09ac5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16aeec06f28066d085a1720c08148e72c022d85d53a92620e2f94e53ad97c4b3,PodSandboxId:8f40df14522650542af4367198e1870479e81f6d9fb7c4bf8e275ca595dbd290,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737980008773525474,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-vzjzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aedae7d-b60a-4219-849b-dfc5d8156423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:8129f7c08ae63558773e045deb07dc150e710891d2b769836ec2e4c771d260ec,PodSandboxId:88d1afd66dcd87f8865829eeb83ee45fe097eec5be78ec28e5bb1bf6a1671e68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737980005975514879,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6nvj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05ac194f-4401-4acd-95c9-693bb2532d53,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f15a1684bd57bb0a6f28279c12a
3ba17363c1b68c8d59adb43588888d85b333,PodSandboxId:5a32099963799987ec81ced408e99c545763b3c7b442f5db67d5bb605c123742,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737979995234226340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d688153239b508eff16b351467fcaa68,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb6eadc87ddfa50e53facda00cd14059c10c69c076bc1
0a71f8c509295e8d4d8,PodSandboxId:1b056b46df5028384c54ce48e97c163c944ebeafc66e978652cdb540f17616dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737979995212435349,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6050c2fb3a2d208fac904f6f4b9487f,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21e9bcc9f6e87a445891c9ecabde87b0967826e67629c398609edb76cbc033
75,PodSandboxId:333472b5fc65f44176bac60799df7955a6f2ff9b40b98fe3b4a5ebf303285403,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737979995156830202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85595ee66b6074d8ae9ec2bbc5b6e030,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a022b8921d621bbd8cff7beb85efbaaee7f8ed013eee727a88da3c1e78395a,PodSandboxId:0b8b98dedd510dc39504a922137b09b
dffc77394de9c9b1691ff6bc309e68848,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737979995099399482,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-645690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53715843a73f6429ecfd00b619b8da15,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f289865a-3c24-4cd7-a163-a3be04b5e4d0 name=/runtime.v1.RuntimeServ
ice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a0f9acdc3fe2f       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   d30e6e7a37501       nginx
	445ee2737fb67       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   4d08254f42182       busybox
	b207e3a204345       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   e06725803bf65       ingress-nginx-controller-56d7c84fd4-nrz2q
	7fef05a36a411       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   e98016ff11abe       ingress-nginx-admission-patch-k49p8
	0c771e2af7298       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   f6772e7d7a215       ingress-nginx-admission-create-cc4sg
	a9fab12835504       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   4fff5b95dda8e       amd-gpu-device-plugin-928wb
	271e72d2fd7a3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   3b73acc52613d       kube-ingress-dns-minikube
	f4100f775a257       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             6 minutes ago       Running             storage-provisioner       0                   b9fc04dcf9187       storage-provisioner
	16aeec06f2806       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             6 minutes ago       Running             coredns                   0                   8f40df1452265       coredns-668d6bf9bc-vzjzz
	8129f7c08ae63       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             6 minutes ago       Running             kube-proxy                0                   88d1afd66dcd8       kube-proxy-6nvj9
	6f15a1684bd57       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             6 minutes ago       Running             kube-scheduler            0                   5a32099963799       kube-scheduler-addons-645690
	fb6eadc87ddfa       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             6 minutes ago       Running             kube-apiserver            0                   1b056b46df502       kube-apiserver-addons-645690
	21e9bcc9f6e87       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             6 minutes ago       Running             etcd                      0                   333472b5fc65f       etcd-addons-645690
	65a022b8921d6       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             6 minutes ago       Running             kube-controller-manager   0                   0b8b98dedd510       kube-controller-manager-addons-645690
	
	
	==> coredns [16aeec06f28066d085a1720c08148e72c022d85d53a92620e2f94e53ad97c4b3] <==
	[INFO] 10.244.0.7:40918 - 62950 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000342301s
	[INFO] 10.244.0.7:40918 - 37511 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000100859s
	[INFO] 10.244.0.7:40918 - 54843 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000138473s
	[INFO] 10.244.0.7:40918 - 24972 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000148919s
	[INFO] 10.244.0.7:40918 - 19332 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000091867s
	[INFO] 10.244.0.7:40918 - 32987 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000093445s
	[INFO] 10.244.0.7:40918 - 41978 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000067568s
	[INFO] 10.244.0.7:45538 - 52428 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009088s
	[INFO] 10.244.0.7:45538 - 52162 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000061857s
	[INFO] 10.244.0.7:48779 - 2131 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059739s
	[INFO] 10.244.0.7:48779 - 1921 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000332529s
	[INFO] 10.244.0.7:59656 - 5040 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000486788s
	[INFO] 10.244.0.7:59656 - 5332 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066155s
	[INFO] 10.244.0.7:38485 - 61130 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000115058s
	[INFO] 10.244.0.7:38485 - 61359 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072396s
	[INFO] 10.244.0.23:58054 - 1028 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000414828s
	[INFO] 10.244.0.23:33161 - 10693 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00011674s
	[INFO] 10.244.0.23:59392 - 45227 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120857s
	[INFO] 10.244.0.23:55831 - 61733 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000374501s
	[INFO] 10.244.0.23:42961 - 43852 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088974s
	[INFO] 10.244.0.23:53344 - 19782 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130659s
	[INFO] 10.244.0.23:54369 - 12342 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004766377s
	[INFO] 10.244.0.23:54470 - 8343 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.006180172s
	[INFO] 10.244.0.27:35096 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0009284s
	[INFO] 10.244.0.27:52519 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000185956s
	
	
	==> describe nodes <==
	Name:               addons-645690
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-645690
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=addons-645690
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_13_21_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-645690
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:13:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-645690
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:19:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:17:26 +0000   Mon, 27 Jan 2025 12:13:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:17:26 +0000   Mon, 27 Jan 2025 12:13:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:17:26 +0000   Mon, 27 Jan 2025 12:13:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:17:26 +0000   Mon, 27 Jan 2025 12:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    addons-645690
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 ead59a4e18ad4a8c9d73af510b107d4a
	  System UUID:                ead59a4e-18ad-4a8c-9d73-af510b107d4a
	  Boot ID:                    bc5a19ae-31b7-43f6-bf31-d6aa912335c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  default                     hello-world-app-7d9564db4-hgq2b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-nrz2q    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         6m3s
	  kube-system                 amd-gpu-device-plugin-928wb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 coredns-668d6bf9bc-vzjzz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m11s
	  kube-system                 etcd-addons-645690                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m16s
	  kube-system                 kube-apiserver-addons-645690                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-addons-645690        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-proxy-6nvj9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-addons-645690                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s (x8 over 6m22s)  kubelet          Node addons-645690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s (x8 over 6m22s)  kubelet          Node addons-645690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s (x7 over 6m22s)  kubelet          Node addons-645690 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s                  kubelet          Node addons-645690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s                  kubelet          Node addons-645690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s                  kubelet          Node addons-645690 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m15s                  kubelet          Node addons-645690 status is now: NodeReady
	  Normal  RegisteredNode           6m12s                  node-controller  Node addons-645690 event: Registered Node addons-645690 in Controller
	  Normal  CIDRAssignmentFailed     6m12s                  cidrAllocator    Node addons-645690 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +4.137480] systemd-fstab-generator[867]: Ignoring "noauto" option for root device
	[  +0.057257] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.347126] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.081002] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.913119] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +0.168065] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.052259] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.215477] kauditd_printk_skb: 142 callbacks suppressed
	[  +6.390596] kauditd_printk_skb: 60 callbacks suppressed
	[Jan27 12:14] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.622846] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.335578] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.135600] kauditd_printk_skb: 25 callbacks suppressed
	[Jan27 12:15] kauditd_printk_skb: 39 callbacks suppressed
	[ +55.002767] kauditd_printk_skb: 15 callbacks suppressed
	[Jan27 12:16] kauditd_printk_skb: 9 callbacks suppressed
	[ +23.978855] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.939563] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.990542] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.460180] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.153713] kauditd_printk_skb: 55 callbacks suppressed
	[Jan27 12:17] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.395744] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.076794] kauditd_printk_skb: 20 callbacks suppressed
	[ +16.409433] kauditd_printk_skb: 52 callbacks suppressed
	
	
	==> etcd [21e9bcc9f6e87a445891c9ecabde87b0967826e67629c398609edb76cbc03375] <==
	{"level":"warn","ts":"2025-01-27T12:14:35.895550Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:14:35.475728Z","time spent":"419.817348ms","remote":"127.0.0.1:54934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T12:14:35.895605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.225382ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:14:35.895620Z","caller":"traceutil/trace.go:171","msg":"trace[97768073] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:993; }","duration":"228.264101ms","start":"2025-01-27T12:14:35.667351Z","end":"2025-01-27T12:14:35.895616Z","steps":["trace[97768073] 'agreement among raft nodes before linearized reading'  (duration: 228.24239ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:14:56.601216Z","caller":"traceutil/trace.go:171","msg":"trace[102156333] linearizableReadLoop","detail":"{readStateIndex:1102; appliedIndex:1101; }","duration":"434.467375ms","start":"2025-01-27T12:14:56.166723Z","end":"2025-01-27T12:14:56.601190Z","steps":["trace[102156333] 'read index received'  (duration: 434.324413ms)","trace[102156333] 'applied index is now lower than readState.Index'  (duration: 142.54µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:14:56.601431Z","caller":"traceutil/trace.go:171","msg":"trace[1358128063] transaction","detail":"{read_only:false; response_revision:1066; number_of_response:1; }","duration":"507.271241ms","start":"2025-01-27T12:14:56.094152Z","end":"2025-01-27T12:14:56.601423Z","steps":["trace[1358128063] 'process raft request'  (duration: 506.932355ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:14:56.601545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.71404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:14:56.601597Z","caller":"traceutil/trace.go:171","msg":"trace[1923607973] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"340.796653ms","start":"2025-01-27T12:14:56.260792Z","end":"2025-01-27T12:14:56.601588Z","steps":["trace[1923607973] 'agreement among raft nodes before linearized reading'  (duration: 340.70417ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:14:56.601617Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:14:56.260777Z","time spent":"340.834428ms","remote":"127.0.0.1:54934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T12:14:56.601740Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"435.017665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:14:56.601774Z","caller":"traceutil/trace.go:171","msg":"trace[1600054745] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"435.075585ms","start":"2025-01-27T12:14:56.166692Z","end":"2025-01-27T12:14:56.601768Z","steps":["trace[1600054745] 'agreement among raft nodes before linearized reading'  (duration: 435.033921ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:14:56.601866Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:14:56.166674Z","time spent":"435.186629ms","remote":"127.0.0.1:54934","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T12:14:56.602105Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.269419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:14:56.602149Z","caller":"traceutil/trace.go:171","msg":"trace[1197738912] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1066; }","duration":"127.327969ms","start":"2025-01-27T12:14:56.474809Z","end":"2025-01-27T12:14:56.602137Z","steps":["trace[1197738912] 'agreement among raft nodes before linearized reading'  (duration: 127.279665ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:14:56.601571Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T12:14:56.094133Z","time spent":"507.337551ms","remote":"127.0.0.1:54904","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1056 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-01-27T12:16:47.363155Z","caller":"traceutil/trace.go:171","msg":"trace[818059528] transaction","detail":"{read_only:false; response_revision:1500; number_of_response:1; }","duration":"120.255548ms","start":"2025-01-27T12:16:47.242828Z","end":"2025-01-27T12:16:47.363084Z","steps":["trace[818059528] 'process raft request'  (duration: 120.139237ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:16:57.773141Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.998568ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:16:57.773270Z","caller":"traceutil/trace.go:171","msg":"trace[2105338493] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1586; }","duration":"183.241103ms","start":"2025-01-27T12:16:57.590009Z","end":"2025-01-27T12:16:57.773251Z","steps":["trace[2105338493] 'range keys from in-memory index tree'  (duration: 182.987191ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:16:57.773394Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.337629ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3725"}
	{"level":"info","ts":"2025-01-27T12:16:57.773426Z","caller":"traceutil/trace.go:171","msg":"trace[1293891138] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:1586; }","duration":"169.4213ms","start":"2025-01-27T12:16:57.603995Z","end":"2025-01-27T12:16:57.773416Z","steps":["trace[1293891138] 'range keys from in-memory index tree'  (duration: 169.146173ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:17:05.766243Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.847665ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:17:05.766412Z","caller":"traceutil/trace.go:171","msg":"trace[1360264892] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1678; }","duration":"177.059733ms","start":"2025-01-27T12:17:05.589337Z","end":"2025-01-27T12:17:05.766397Z","steps":["trace[1360264892] 'range keys from in-memory index tree'  (duration: 176.835915ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:17:05.766824Z","caller":"traceutil/trace.go:171","msg":"trace[284334321] transaction","detail":"{read_only:false; response_revision:1679; number_of_response:1; }","duration":"179.937062ms","start":"2025-01-27T12:17:05.586876Z","end":"2025-01-27T12:17:05.766813Z","steps":["trace[284334321] 'process raft request'  (duration: 178.777469ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:17:06.050613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.284475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-27T12:17:06.050873Z","caller":"traceutil/trace.go:171","msg":"trace[1244279848] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1679; }","duration":"138.673737ms","start":"2025-01-27T12:17:05.912187Z","end":"2025-01-27T12:17:06.050861Z","steps":["trace[1244279848] 'count revisions from in-memory index tree'  (duration: 138.164096ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:17:11.605617Z","caller":"traceutil/trace.go:171","msg":"trace[1011072023] transaction","detail":"{read_only:false; response_revision:1741; number_of_response:1; }","duration":"129.761758ms","start":"2025-01-27T12:17:11.475842Z","end":"2025-01-27T12:17:11.605604Z","steps":["trace[1011072023] 'process raft request'  (duration: 129.643868ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:19:37 up 6 min,  0 users,  load average: 0.18, 0.78, 0.51
	Linux addons-645690 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fb6eadc87ddfa50e53facda00cd14059c10c69c076bc10a71f8c509295e8d4d8] <==
	E0127 12:14:26.035159       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0127 12:16:21.113677       1 conn.go:339] Error on socket receive: read tcp 192.168.39.68:8443->192.168.39.1:58912: use of closed network connection
	E0127 12:16:21.308105       1 conn.go:339] Error on socket receive: read tcp 192.168.39.68:8443->192.168.39.1:58946: use of closed network connection
	I0127 12:16:54.934402       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.45.80"}
	I0127 12:16:56.001176       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0127 12:17:04.867703       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0127 12:17:06.007495       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0127 12:17:10.394686       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0127 12:17:10.567833       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.204.18"}
	I0127 12:17:15.590855       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 12:17:15.590973       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 12:17:15.642964       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 12:17:15.643021       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 12:17:15.685624       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 12:17:15.685659       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 12:17:15.744088       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 12:17:15.744144       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0127 12:17:15.853952       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0127 12:17:15.853984       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0127 12:17:16.744466       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0127 12:17:16.854027       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0127 12:17:16.878587       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0127 12:17:17.159026       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0127 12:17:27.000355       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0127 12:19:35.482187       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.186.128"}
	
	
	==> kube-controller-manager [65a022b8921d621bbd8cff7beb85efbaaee7f8ed013eee727a88da3c1e78395a] <==
	E0127 12:18:29.511136       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 12:18:33.115638       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 12:18:33.116715       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 12:18:33.117661       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 12:18:33.117688       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 12:18:54.620126       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 12:18:54.621179       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0127 12:18:54.622234       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 12:18:54.622382       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 12:19:10.932987       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 12:19:10.933979       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0127 12:19:10.934751       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 12:19:10.934786       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 12:19:16.108521       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 12:19:16.109439       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0127 12:19:16.110239       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 12:19:16.110267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0127 12:19:27.359656       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0127 12:19:27.360748       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0127 12:19:27.361653       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0127 12:19:27.361705       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0127 12:19:35.303804       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="35.733691ms"
	I0127 12:19:35.326617       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="22.755489ms"
	I0127 12:19:35.327026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="32.532µs"
	I0127 12:19:35.327132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="18.633µs"
	
	
	==> kube-proxy [8129f7c08ae63558773e045deb07dc150e710891d2b769836ec2e4c771d260ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:13:26.939941       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:13:26.973000       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.68"]
	E0127 12:13:26.973698       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:13:27.058247       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:13:27.058349       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:13:27.058370       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:13:27.060804       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:13:27.061040       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:13:27.061053       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:13:27.062071       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:13:27.062128       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:13:27.062173       1 config.go:199] "Starting service config controller"
	I0127 12:13:27.062176       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:13:27.063005       1 config.go:329] "Starting node config controller"
	I0127 12:13:27.063032       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:13:27.162715       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:13:27.162830       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:13:27.163071       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f15a1684bd57bb0a6f28279c12a3ba17363c1b68c8d59adb43588888d85b333] <==
	W0127 12:13:17.756045       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:13:17.756074       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:17.756112       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:13:17.756138       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:17.756203       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:13:17.756213       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 12:13:18.571286       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:13:18.571368       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:18.593992       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 12:13:18.594086       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 12:13:18.596449       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 12:13:18.597026       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:18.605749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:13:18.606180       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:18.629857       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:13:18.629920       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:18.864764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 12:13:18.864925       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:18.921415       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 12:13:18.921546       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:18.931936       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 12:13:18.932011       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:13:18.971828       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 12:13:18.972154       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:13:20.922648       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:19:10 addons-645690 kubelet[1220]: E0127 12:19:10.842650    1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980350842101360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:19:10 addons-645690 kubelet[1220]: E0127 12:19:10.842695    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980350842101360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:19:20 addons-645690 kubelet[1220]: E0127 12:19:20.730946    1220 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:19:20 addons-645690 kubelet[1220]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:19:20 addons-645690 kubelet[1220]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:19:20 addons-645690 kubelet[1220]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:19:20 addons-645690 kubelet[1220]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:19:20 addons-645690 kubelet[1220]: E0127 12:19:20.847411    1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980360846892131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:19:20 addons-645690 kubelet[1220]: E0127 12:19:20.847454    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980360846892131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:19:30 addons-645690 kubelet[1220]: E0127 12:19:30.850447    1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980370850049250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:19:30 addons-645690 kubelet[1220]: E0127 12:19:30.850533    1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737980370850049250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595279,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297749    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="79449471-85b0-4af2-8f6c-3e766db14406" containerName="csi-external-health-monitor-controller"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297798    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="79449471-85b0-4af2-8f6c-3e766db14406" containerName="node-driver-registrar"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297806    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="79449471-85b0-4af2-8f6c-3e766db14406" containerName="hostpath"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297812    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="79449471-85b0-4af2-8f6c-3e766db14406" containerName="csi-snapshotter"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297818    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="f6bf04e6-cbcc-4b98-ab6c-50a8811890a5" containerName="headlamp"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297823    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="79449471-85b0-4af2-8f6c-3e766db14406" containerName="csi-provisioner"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297827    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="8f13e21c-71c7-44b3-bc6c-f0b66e7d354a" containerName="task-pv-container"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297832    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="79449471-85b0-4af2-8f6c-3e766db14406" containerName="liveness-probe"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297837    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="1ad2acdd-86cf-4cc4-b719-619f276775d7" containerName="csi-attacher"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297842    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="af45f91d-d4d6-449e-86d7-874ea36db428" containerName="volume-snapshot-controller"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297847    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="350abcc5-c641-411d-a412-c48abb8be959" containerName="csi-resizer"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297851    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="32aaa4e9-5a5b-457c-ac26-acd8ce22ab58" containerName="local-path-provisioner"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.297856    1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="d72c02f3-13eb-4851-bdcb-02b18bd63dd7" containerName="volume-snapshot-controller"
	Jan 27 12:19:35 addons-645690 kubelet[1220]: I0127 12:19:35.434260    1220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfl5n\" (UniqueName: \"kubernetes.io/projected/f756b4d6-e4c4-4856-8c6f-1be7d8fe8ecc-kube-api-access-cfl5n\") pod \"hello-world-app-7d9564db4-hgq2b\" (UID: \"f756b4d6-e4c4-4856-8c6f-1be7d8fe8ecc\") " pod="default/hello-world-app-7d9564db4-hgq2b"
	
	
	==> storage-provisioner [f4100f775a2579f72d54b5b4a4b52f3ddc7f27c5e211eed8d07a0448b040ea13] <==
	I0127 12:13:32.289155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 12:13:32.369587       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 12:13:32.369664       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 12:13:32.504042       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 12:13:32.505644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31841451-85c0-4f53-9ccc-ce15d4681ace", APIVersion:"v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-645690_2c7aead1-3f24-4833-b71b-4afbd4b43d99 became leader
	I0127 12:13:32.507038       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-645690_2c7aead1-3f24-4833-b71b-4afbd4b43d99!
	I0127 12:13:32.945586       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-645690_2c7aead1-3f24-4833-b71b-4afbd4b43d99!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-645690 -n addons-645690
helpers_test.go:261: (dbg) Run:  kubectl --context addons-645690 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-hgq2b ingress-nginx-admission-create-cc4sg ingress-nginx-admission-patch-k49p8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-645690 describe pod hello-world-app-7d9564db4-hgq2b ingress-nginx-admission-create-cc4sg ingress-nginx-admission-patch-k49p8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-645690 describe pod hello-world-app-7d9564db4-hgq2b ingress-nginx-admission-create-cc4sg ingress-nginx-admission-patch-k49p8: exit status 1 (66.000443ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-hgq2b
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-645690/192.168.39.68
	Start Time:       Mon, 27 Jan 2025 12:19:35 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cfl5n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cfl5n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-hgq2b to addons-645690
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cc4sg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-k49p8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-645690 describe pod hello-world-app-7d9564db4-hgq2b ingress-nginx-admission-create-cc4sg ingress-nginx-admission-patch-k49p8: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable ingress-dns --alsologtostderr -v=1: (1.18409833s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable ingress --alsologtostderr -v=1: (7.79099802s)
--- FAIL: TestAddons/parallel/Ingress (156.73s)

                                                
                                    
TestPreload (196.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-337839 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0127 13:09:39.547266  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-337839 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m46.88164403s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-337839 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-337839 image pull gcr.io/k8s-minikube/busybox: (5.981090832s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-337839
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-337839: (7.287904448s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-337839 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0127 13:11:08.032233  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-337839 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m13.374474702s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-337839 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-01-27 13:11:53.433743902 +0000 UTC m=+3610.130584713
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-337839 -n test-preload-337839
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-337839 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-337839 logs -n 25: (1.030032559s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-923127 ssh -n                                                                 | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:56 UTC | 27 Jan 25 12:56 UTC |
	|         | multinode-923127-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-923127 ssh -n multinode-923127 sudo cat                                       | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:56 UTC | 27 Jan 25 12:56 UTC |
	|         | /home/docker/cp-test_multinode-923127-m03_multinode-923127.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-923127 cp multinode-923127-m03:/home/docker/cp-test.txt                       | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:56 UTC | 27 Jan 25 12:56 UTC |
	|         | multinode-923127-m02:/home/docker/cp-test_multinode-923127-m03_multinode-923127-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-923127 ssh -n                                                                 | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:56 UTC | 27 Jan 25 12:56 UTC |
	|         | multinode-923127-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-923127 ssh -n multinode-923127-m02 sudo cat                                   | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:56 UTC | 27 Jan 25 12:56 UTC |
	|         | /home/docker/cp-test_multinode-923127-m03_multinode-923127-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-923127 node stop m03                                                          | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:56 UTC | 27 Jan 25 12:56 UTC |
	| node    | multinode-923127 node start                                                             | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:56 UTC | 27 Jan 25 12:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-923127                                                                | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:57 UTC |                     |
	| stop    | -p multinode-923127                                                                     | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 12:57 UTC | 27 Jan 25 13:00 UTC |
	| start   | -p multinode-923127                                                                     | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 13:00 UTC | 27 Jan 25 13:03 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-923127                                                                | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC |                     |
	| node    | multinode-923127 node delete                                                            | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:03 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-923127 stop                                                                   | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 13:03 UTC | 27 Jan 25 13:06 UTC |
	| start   | -p multinode-923127                                                                     | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 13:06 UTC | 27 Jan 25 13:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-923127                                                                | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 13:07 UTC |                     |
	| start   | -p multinode-923127-m02                                                                 | multinode-923127-m02 | jenkins | v1.35.0 | 27 Jan 25 13:07 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-923127-m03                                                                 | multinode-923127-m03 | jenkins | v1.35.0 | 27 Jan 25 13:07 UTC | 27 Jan 25 13:08 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-923127                                                                 | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 13:08 UTC |                     |
	| delete  | -p multinode-923127-m03                                                                 | multinode-923127-m03 | jenkins | v1.35.0 | 27 Jan 25 13:08 UTC | 27 Jan 25 13:08 UTC |
	| delete  | -p multinode-923127                                                                     | multinode-923127     | jenkins | v1.35.0 | 27 Jan 25 13:08 UTC | 27 Jan 25 13:08 UTC |
	| start   | -p test-preload-337839                                                                  | test-preload-337839  | jenkins | v1.35.0 | 27 Jan 25 13:08 UTC | 27 Jan 25 13:10 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-337839 image pull                                                          | test-preload-337839  | jenkins | v1.35.0 | 27 Jan 25 13:10 UTC | 27 Jan 25 13:10 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-337839                                                                  | test-preload-337839  | jenkins | v1.35.0 | 27 Jan 25 13:10 UTC | 27 Jan 25 13:10 UTC |
	| start   | -p test-preload-337839                                                                  | test-preload-337839  | jenkins | v1.35.0 | 27 Jan 25 13:10 UTC | 27 Jan 25 13:11 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-337839 image list                                                          | test-preload-337839  | jenkins | v1.35.0 | 27 Jan 25 13:11 UTC | 27 Jan 25 13:11 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:10:39
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
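
Given that format, warnings and errors are easy to pull out of the dump with a simple filter; a minimal sketch (the log file name here is hypothetical):

	# keep only W/E entries from a glog-style dump ([IWEF]mmdd hh:mm:ss.uuuuuu ...)
	grep -E '^[[:space:]]*[WE][0-9]{4} ' last_start.log
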
	I0127 13:10:39.868780  401264 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:10:39.868873  401264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:10:39.868878  401264 out.go:358] Setting ErrFile to fd 2...
	I0127 13:10:39.868882  401264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:10:39.869065  401264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:10:39.869631  401264 out.go:352] Setting JSON to false
	I0127 13:10:39.870489  401264 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":21180,"bootTime":1737962260,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:10:39.870636  401264 start.go:139] virtualization: kvm guest
	I0127 13:10:39.872679  401264 out.go:177] * [test-preload-337839] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:10:39.873854  401264 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:10:39.873889  401264 notify.go:220] Checking for updates...
	I0127 13:10:39.876125  401264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:10:39.877430  401264 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:10:39.878590  401264 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:10:39.879711  401264 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:10:39.880804  401264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:10:39.882418  401264 config.go:182] Loaded profile config "test-preload-337839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 13:10:39.882880  401264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:10:39.882949  401264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:10:39.898107  401264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0127 13:10:39.898483  401264 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:10:39.898994  401264 main.go:141] libmachine: Using API Version  1
	I0127 13:10:39.899014  401264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:10:39.899359  401264 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:10:39.899575  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:10:39.901233  401264 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
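
Acting on that hint would just mean re-running start with the newer version pinned; roughly, with the same driver and runtime flags used throughout this run (illustrative, not executed here):

	out/minikube-linux-amd64 start -p test-preload-337839 \
	  --kubernetes-version=v1.32.1 --driver=kvm2 --container-runtime=crio
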
	I0127 13:10:39.902411  401264 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:10:39.902730  401264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:10:39.902768  401264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:10:39.917199  401264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0127 13:10:39.917655  401264 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:10:39.918072  401264 main.go:141] libmachine: Using API Version  1
	I0127 13:10:39.918089  401264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:10:39.918404  401264 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:10:39.918576  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:10:39.952419  401264 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:10:39.953644  401264 start.go:297] selected driver: kvm2
	I0127 13:10:39.953657  401264 start.go:901] validating driver "kvm2" against &{Name:test-preload-337839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-337839
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:10:39.953760  401264 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:10:39.954424  401264 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:10:39.954511  401264 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:10:39.968565  401264 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:10:39.968900  401264 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:10:39.968938  401264 cni.go:84] Creating CNI manager for ""
	I0127 13:10:39.969002  401264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:10:39.969069  401264 start.go:340] cluster config:
	{Name:test-preload-337839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-337839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:10:39.969184  401264 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:10:39.970935  401264 out.go:177] * Starting "test-preload-337839" primary control-plane node in "test-preload-337839" cluster
	I0127 13:10:39.972131  401264 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 13:10:40.695724  401264 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 13:10:40.695762  401264 cache.go:56] Caching tarball of preloaded images
	I0127 13:10:40.695957  401264 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 13:10:40.697871  401264 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0127 13:10:40.699164  401264 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 13:10:40.856325  401264 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0127 13:10:57.032914  401264 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 13:10:57.033035  401264 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0127 13:10:57.904876  401264 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
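
A rough shell equivalent of the download-and-verify step above, reusing the URL and md5 checksum from the log (illustrative only; minikube does this in Go inside preload.go and stores the tarball under .minikube/cache/preloaded-tarball/):

	# URL and md5 are exactly as logged above; destination simplified to the current directory
	URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4'
	OUT=$(basename "$URL")
	curl -fSL -o "$OUT" "$URL"
	# the checksum value comes from the ?checksum=md5:... query parameter above
	echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  $OUT" | md5sum -c -
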
	I0127 13:10:57.905055  401264 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/config.json ...
	I0127 13:10:57.905322  401264 start.go:360] acquireMachinesLock for test-preload-337839: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:10:57.905417  401264 start.go:364] duration metric: took 60.14µs to acquireMachinesLock for "test-preload-337839"
	I0127 13:10:57.905442  401264 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:10:57.905451  401264 fix.go:54] fixHost starting: 
	I0127 13:10:57.905749  401264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:10:57.905802  401264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:10:57.921284  401264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0127 13:10:57.921736  401264 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:10:57.922323  401264 main.go:141] libmachine: Using API Version  1
	I0127 13:10:57.922358  401264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:10:57.922734  401264 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:10:57.922967  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:10:57.923135  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetState
	I0127 13:10:57.924831  401264 fix.go:112] recreateIfNeeded on test-preload-337839: state=Stopped err=<nil>
	I0127 13:10:57.924868  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	W0127 13:10:57.925026  401264 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:10:57.927450  401264 out.go:177] * Restarting existing kvm2 VM for "test-preload-337839" ...
	I0127 13:10:57.928727  401264 main.go:141] libmachine: (test-preload-337839) Calling .Start
	I0127 13:10:57.928925  401264 main.go:141] libmachine: (test-preload-337839) starting domain...
	I0127 13:10:57.928951  401264 main.go:141] libmachine: (test-preload-337839) ensuring networks are active...
	I0127 13:10:57.929670  401264 main.go:141] libmachine: (test-preload-337839) Ensuring network default is active
	I0127 13:10:57.930069  401264 main.go:141] libmachine: (test-preload-337839) Ensuring network mk-test-preload-337839 is active
	I0127 13:10:57.930381  401264 main.go:141] libmachine: (test-preload-337839) getting domain XML...
	I0127 13:10:57.931034  401264 main.go:141] libmachine: (test-preload-337839) creating domain...
	I0127 13:10:59.118783  401264 main.go:141] libmachine: (test-preload-337839) waiting for IP...
	I0127 13:10:59.119638  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:10:59.119987  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:10:59.120030  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:10:59.119954  401347 retry.go:31] will retry after 292.972706ms: waiting for domain to come up
	I0127 13:10:59.414449  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:10:59.414904  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:10:59.414936  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:10:59.414854  401347 retry.go:31] will retry after 338.88989ms: waiting for domain to come up
	I0127 13:10:59.755436  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:10:59.755890  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:10:59.755908  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:10:59.755872  401347 retry.go:31] will retry after 339.911086ms: waiting for domain to come up
	I0127 13:11:00.097435  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:00.097832  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:00.097862  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:00.097793  401347 retry.go:31] will retry after 552.252736ms: waiting for domain to come up
	I0127 13:11:00.651453  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:00.651901  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:00.651923  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:00.651864  401347 retry.go:31] will retry after 636.848995ms: waiting for domain to come up
	I0127 13:11:01.290773  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:01.291117  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:01.291157  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:01.291089  401347 retry.go:31] will retry after 946.77882ms: waiting for domain to come up
	I0127 13:11:02.239315  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:02.239715  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:02.239747  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:02.239658  401347 retry.go:31] will retry after 963.346382ms: waiting for domain to come up
	I0127 13:11:03.204295  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:03.204689  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:03.204718  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:03.204649  401347 retry.go:31] will retry after 1.433709317s: waiting for domain to come up
	I0127 13:11:04.640203  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:04.640674  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:04.640691  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:04.640644  401347 retry.go:31] will retry after 1.556479699s: waiting for domain to come up
	I0127 13:11:06.198672  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:06.199102  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:06.199129  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:06.199073  401347 retry.go:31] will retry after 2.100911691s: waiting for domain to come up
	I0127 13:11:08.301967  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:08.302338  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:08.302366  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:08.302316  401347 retry.go:31] will retry after 2.490485691s: waiting for domain to come up
	I0127 13:11:10.795907  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:10.796214  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:10.796245  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:10.796199  401347 retry.go:31] will retry after 2.99383472s: waiting for domain to come up
	I0127 13:11:13.791790  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:13.792144  401264 main.go:141] libmachine: (test-preload-337839) DBG | unable to find current IP address of domain test-preload-337839 in network mk-test-preload-337839
	I0127 13:11:13.792174  401264 main.go:141] libmachine: (test-preload-337839) DBG | I0127 13:11:13.792096  401347 retry.go:31] will retry after 3.500774662s: waiting for domain to come up
	I0127 13:11:17.296735  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.297156  401264 main.go:141] libmachine: (test-preload-337839) found domain IP: 192.168.39.80
	I0127 13:11:17.297179  401264 main.go:141] libmachine: (test-preload-337839) reserving static IP address...
	I0127 13:11:17.297191  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has current primary IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.297592  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "test-preload-337839", mac: "52:54:00:d9:5a:cd", ip: "192.168.39.80"} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:17.297620  401264 main.go:141] libmachine: (test-preload-337839) DBG | skip adding static IP to network mk-test-preload-337839 - found existing host DHCP lease matching {name: "test-preload-337839", mac: "52:54:00:d9:5a:cd", ip: "192.168.39.80"}
	I0127 13:11:17.297631  401264 main.go:141] libmachine: (test-preload-337839) reserved static IP address 192.168.39.80 for domain test-preload-337839
	I0127 13:11:17.297641  401264 main.go:141] libmachine: (test-preload-337839) waiting for SSH...
	I0127 13:11:17.297649  401264 main.go:141] libmachine: (test-preload-337839) DBG | Getting to WaitForSSH function...
	I0127 13:11:17.299657  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.299953  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:17.299997  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.300115  401264 main.go:141] libmachine: (test-preload-337839) DBG | Using SSH client type: external
	I0127 13:11:17.300138  401264 main.go:141] libmachine: (test-preload-337839) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa (-rw-------)
	I0127 13:11:17.300170  401264 main.go:141] libmachine: (test-preload-337839) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:11:17.300190  401264 main.go:141] libmachine: (test-preload-337839) DBG | About to run SSH command:
	I0127 13:11:17.300204  401264 main.go:141] libmachine: (test-preload-337839) DBG | exit 0
	I0127 13:11:17.426380  401264 main.go:141] libmachine: (test-preload-337839) DBG | SSH cmd err, output: <nil>: 
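
The restart sequence above boils down to: boot the libvirt domain, poll the network's DHCP leases until the VM's MAC address shows up, then probe SSH with "exit 0" until the guest answers. A rough shell approximation using the values from the log (illustrative; the kvm2 driver does this through the libvirt API and its own SSH plumbing):

	MAC=52:54:00:d9:5a:cd
	NET=mk-test-preload-337839
	IP=192.168.39.80
	KEY=/home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa
	# wait until the guest appears in the network's DHCP leases
	until virsh --connect qemu:///system net-dhcp-leases "$NET" | grep -q "$MAC"; do sleep 1; done
	# probe SSH the same way the log does: run "exit 0" until it succeeds
	until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
	      -o IdentitiesOnly=yes -i "$KEY" docker@"$IP" exit 0; do sleep 2; done
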
	I0127 13:11:17.426789  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetConfigRaw
	I0127 13:11:17.427527  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetIP
	I0127 13:11:17.430116  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.430465  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:17.430500  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.430724  401264 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/config.json ...
	I0127 13:11:17.430946  401264 machine.go:93] provisionDockerMachine start ...
	I0127 13:11:17.430970  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:11:17.431206  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:17.433598  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.433913  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:17.433936  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.434058  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:17.434224  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:17.434360  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:17.434450  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:17.434639  401264 main.go:141] libmachine: Using SSH client type: native
	I0127 13:11:17.434860  401264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0127 13:11:17.434875  401264 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:11:17.542589  401264 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:11:17.542615  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetMachineName
	I0127 13:11:17.542874  401264 buildroot.go:166] provisioning hostname "test-preload-337839"
	I0127 13:11:17.542910  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetMachineName
	I0127 13:11:17.543120  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:17.545727  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.546121  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:17.546160  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.546282  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:17.546479  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:17.546645  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:17.546792  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:17.547004  401264 main.go:141] libmachine: Using SSH client type: native
	I0127 13:11:17.547187  401264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0127 13:11:17.547198  401264 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-337839 && echo "test-preload-337839" | sudo tee /etc/hostname
	I0127 13:11:17.668635  401264 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-337839
	
	I0127 13:11:17.668662  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:17.671354  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.671685  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:17.671728  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.671906  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:17.672082  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:17.672275  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:17.672397  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:17.672523  401264 main.go:141] libmachine: Using SSH client type: native
	I0127 13:11:17.672738  401264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0127 13:11:17.672758  401264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-337839' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-337839/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-337839' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:11:17.787479  401264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:11:17.787513  401264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:11:17.787531  401264 buildroot.go:174] setting up certificates
	I0127 13:11:17.787544  401264 provision.go:84] configureAuth start
	I0127 13:11:17.787552  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetMachineName
	I0127 13:11:17.787834  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetIP
	I0127 13:11:17.790460  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.790772  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:17.790806  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.790941  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:17.792914  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.793245  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:17.793287  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:17.793401  401264 provision.go:143] copyHostCerts
	I0127 13:11:17.793468  401264 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:11:17.793479  401264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:11:17.793542  401264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:11:17.793636  401264 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:11:17.793644  401264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:11:17.793669  401264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:11:17.793732  401264 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:11:17.793739  401264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:11:17.793760  401264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:11:17.793825  401264 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.test-preload-337839 san=[127.0.0.1 192.168.39.80 localhost minikube test-preload-337839]
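
configureAuth signs a per-machine server certificate against the minikube CA using the SANs listed above; done by hand with openssl it would look roughly like this (illustrative only; minikube generates the certificate in Go, and the output file names and validity period here are assumptions):

	CERTS=/home/jenkins/minikube-integration/20317-361578/.minikube/certs
	# key + CSR; the organization matches the log (jenkins.test-preload-337839)
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.test-preload-337839"
	# sign with the minikube CA and attach the SANs from the log line above
	openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" \
	  -CAcreateserial -days 1095 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.80,DNS:localhost,DNS:minikube,DNS:test-preload-337839')
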
	I0127 13:11:18.050596  401264 provision.go:177] copyRemoteCerts
	I0127 13:11:18.050657  401264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:11:18.050685  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:18.053527  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.053881  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:18.053905  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.054079  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:18.054240  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:18.054375  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:18.054485  401264 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa Username:docker}
	I0127 13:11:18.135883  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:11:18.159658  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 13:11:18.182471  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 13:11:18.204859  401264 provision.go:87] duration metric: took 417.304757ms to configureAuth
	I0127 13:11:18.204886  401264 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:11:18.205032  401264 config.go:182] Loaded profile config "test-preload-337839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 13:11:18.205107  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:18.207851  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.208211  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:18.208253  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.208422  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:18.208580  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:18.208787  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:18.208910  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:18.209114  401264 main.go:141] libmachine: Using SSH client type: native
	I0127 13:11:18.209311  401264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0127 13:11:18.209340  401264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:11:18.435577  401264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:11:18.435620  401264 machine.go:96] duration metric: took 1.004657222s to provisionDockerMachine
	I0127 13:11:18.435653  401264 start.go:293] postStartSetup for "test-preload-337839" (driver="kvm2")
	I0127 13:11:18.435671  401264 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:11:18.435701  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:11:18.436026  401264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:11:18.436071  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:18.438713  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.439139  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:18.439177  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.439281  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:18.439490  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:18.439679  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:18.439811  401264 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa Username:docker}
	I0127 13:11:18.524840  401264 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:11:18.529021  401264 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:11:18.529042  401264 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:11:18.529099  401264 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:11:18.529182  401264 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:11:18.529269  401264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:11:18.538076  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:11:18.561251  401264 start.go:296] duration metric: took 125.583746ms for postStartSetup
	I0127 13:11:18.561287  401264 fix.go:56] duration metric: took 20.655837809s for fixHost
	I0127 13:11:18.561308  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:18.564054  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.564387  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:18.564410  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.564593  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:18.564781  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:18.564950  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:18.565057  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:18.565191  401264 main.go:141] libmachine: Using SSH client type: native
	I0127 13:11:18.565396  401264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.80 22 <nil> <nil>}
	I0127 13:11:18.565409  401264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:11:18.670864  401264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737983478.647865209
	
	I0127 13:11:18.670895  401264 fix.go:216] guest clock: 1737983478.647865209
	I0127 13:11:18.670903  401264 fix.go:229] Guest: 2025-01-27 13:11:18.647865209 +0000 UTC Remote: 2025-01-27 13:11:18.561290734 +0000 UTC m=+38.729823823 (delta=86.574475ms)
	I0127 13:11:18.670922  401264 fix.go:200] guest clock delta is within tolerance: 86.574475ms
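
(The tolerance check is plain subtraction: guest 1737983478.647865209 minus the host reference 1737983478.561290734 is 0.086574475s, i.e. the 86.574475ms delta reported above, well within the allowed drift.)
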
	I0127 13:11:18.670927  401264 start.go:83] releasing machines lock for "test-preload-337839", held for 20.765496546s
	I0127 13:11:18.670944  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:11:18.671202  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetIP
	I0127 13:11:18.673866  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.674205  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:18.674226  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.674429  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:11:18.674892  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:11:18.675063  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:11:18.675172  401264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:11:18.675217  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:18.675260  401264 ssh_runner.go:195] Run: cat /version.json
	I0127 13:11:18.675287  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:18.677779  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.677806  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.678107  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:18.678148  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.678274  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:18.678367  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:18.678393  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:18.678446  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:18.678596  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:18.678606  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:18.678769  401264 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa Username:docker}
	I0127 13:11:18.678800  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:18.678954  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:18.679079  401264 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa Username:docker}
	I0127 13:11:18.755526  401264 ssh_runner.go:195] Run: systemctl --version
	I0127 13:11:18.780827  401264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:11:18.920927  401264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:11:18.927556  401264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:11:18.927633  401264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:11:18.943284  401264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
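The find/mv pair above renames any pre-existing bridge or podman CNI configs out of the way (here only 87-podman-bridge.conflist), so that the CNI minikube configures later is the only one CRI-O picks up. A rough manual equivalent, with the shell quoting the log strips added back and the mv done the portable way:

    $ sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;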
	I0127 13:11:18.943312  401264 start.go:495] detecting cgroup driver to use...
	I0127 13:11:18.943376  401264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:11:18.958906  401264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:11:18.971995  401264 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:11:18.972049  401264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:11:18.984478  401264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:11:18.996933  401264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:11:19.104857  401264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:11:19.231072  401264 docker.go:233] disabling docker service ...
	I0127 13:11:19.231152  401264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:11:19.247319  401264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:11:19.260194  401264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:11:19.384160  401264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:11:19.491067  401264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
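The block above stops cri-dockerd and dockerd, disables their sockets and masks the services, leaving CRI-O as the only runtime the kubelet can reach. Condensed into a manual sequence (a sketch; the log drives each unit separately):

    $ sudo systemctl stop cri-docker.socket cri-docker.service docker.socket docker.service
    $ sudo systemctl disable cri-docker.socket docker.socket
    $ sudo systemctl mask cri-docker.service docker.service
    $ systemctl is-active docker    # expect "inactive" once stopped and masked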
	I0127 13:11:19.504505  401264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:11:19.522496  401264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0127 13:11:19.522559  401264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:11:19.532613  401264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:11:19.532660  401264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:11:19.542658  401264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:11:19.552572  401264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:11:19.562635  401264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:11:19.572986  401264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:11:19.582831  401264 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:11:19.599571  401264 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
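The tee above pins crictl to the CRI-O socket, and the sed edits then point CRI-O at the pause:3.7 image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports. A sketch of the expected results, limited to the fields these commands touch (not full file dumps):

    $ cat /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    $ sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # ... roughly, after the edits above:
    pause_image = "registry.k8s.io/pause:3.7"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]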
	I0127 13:11:19.609624  401264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:11:19.618498  401264 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:11:19.618560  401264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:11:19.631027  401264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
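The sysctl probe fails only because br_netfilter is not loaded yet; loading the module and enabling IPv4 forwarding is the whole fix. To confirm by hand (the values of 1 assume the module's defaults plus the echo above):

    $ sudo modprobe br_netfilter
    $ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1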
	I0127 13:11:19.640049  401264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:11:19.742739  401264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:11:19.827938  401264 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:11:19.828023  401264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:11:19.832985  401264 start.go:563] Will wait 60s for crictl version
	I0127 13:11:19.833027  401264 ssh_runner.go:195] Run: which crictl
	I0127 13:11:19.836781  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:11:19.879098  401264 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:11:19.879191  401264 ssh_runner.go:195] Run: crio --version
	I0127 13:11:19.907225  401264 ssh_runner.go:195] Run: crio --version
	I0127 13:11:19.938638  401264 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0127 13:11:19.940048  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetIP
	I0127 13:11:19.942913  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:19.943264  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:19.943290  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:19.943461  401264 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 13:11:19.947611  401264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:11:19.959864  401264 kubeadm.go:883] updating cluster {Name:test-preload-337839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-337839 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:11:19.959967  401264 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0127 13:11:19.960014  401264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:11:19.996249  401264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0127 13:11:19.996316  401264 ssh_runner.go:195] Run: which lz4
	I0127 13:11:20.000424  401264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:11:20.004757  401264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:11:20.004791  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0127 13:11:21.533569  401264 crio.go:462] duration metric: took 1.533171977s to copy over tarball
	I0127 13:11:21.533639  401264 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:11:23.866349  401264 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.332683328s)
	I0127 13:11:23.866378  401264 crio.go:469] duration metric: took 2.332780922s to extract the tarball
	I0127 13:11:23.866385  401264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 13:11:23.908407  401264 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:11:23.951486  401264 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
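The preload tarball is copied into the VM and unpacked under /var, yet the image listing right above still shows none of the v1.24.4 images, so minikube falls back to loading each cached image individually (the cache_images.go lines that follow). The unpack and check, for reference:

    $ sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    $ sudo rm /preloaded.tar.lz4
    $ sudo crictl images | grep v1.24.4    # empty here, hence the per-image fallback below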
	I0127 13:11:23.951517  401264 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 13:11:23.951627  401264 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:11:23.951659  401264 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:11:23.951666  401264 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:11:23.951699  401264 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:11:23.951710  401264 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 13:11:23.951631  401264 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:11:23.951632  401264 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:11:23.951628  401264 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:11:23.953398  401264 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:11:23.953411  401264 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:11:23.953427  401264 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:11:23.953430  401264 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:11:23.953398  401264 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:11:23.953447  401264 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 13:11:23.953401  401264 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:11:23.953463  401264 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:11:24.127149  401264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:11:24.151049  401264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:11:24.166854  401264 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0127 13:11:24.166908  401264 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:11:24.166955  401264 ssh_runner.go:195] Run: which crictl
	I0127 13:11:24.198274  401264 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0127 13:11:24.198311  401264 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:11:24.198359  401264 ssh_runner.go:195] Run: which crictl
	I0127 13:11:24.198366  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:11:24.202863  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:11:24.242424  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:11:24.249598  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:11:24.275857  401264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:11:24.281895  401264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 13:11:24.282275  401264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:11:24.283119  401264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 13:11:24.291587  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0127 13:11:24.297712  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0127 13:11:24.298870  401264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:11:24.447207  401264 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 13:11:24.447271  401264 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:11:24.447329  401264 ssh_runner.go:195] Run: which crictl
	I0127 13:11:24.463406  401264 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0127 13:11:24.463454  401264 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:11:24.463470  401264 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0127 13:11:24.463484  401264 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 13:11:24.463507  401264 ssh_runner.go:195] Run: which crictl
	I0127 13:11:24.463513  401264 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:11:24.463513  401264 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 13:11:24.463553  401264 ssh_runner.go:195] Run: which crictl
	I0127 13:11:24.463553  401264 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0127 13:11:24.463554  401264 ssh_runner.go:195] Run: which crictl
	I0127 13:11:24.463630  401264 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0127 13:11:24.463690  401264 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 13:11:24.463712  401264 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 13:11:24.482755  401264 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0127 13:11:24.482796  401264 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:11:24.482817  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:11:24.482824  401264 ssh_runner.go:195] Run: which crictl
	I0127 13:11:24.482866  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:11:24.482917  401264 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0127 13:11:24.482939  401264 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 13:11:24.482967  401264 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0127 13:11:24.483017  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:11:24.483032  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:11:24.483038  401264 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0127 13:11:24.559974  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:11:24.560115  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:11:26.192199  401264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:11:27.906204  401264 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.423199491s)
	I0127 13:11:27.906252  401264 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0127 13:11:27.906270  401264 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (3.423373356s)
	I0127 13:11:27.906339  401264 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.423292207s)
	I0127 13:11:27.906361  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:11:27.906279  401264 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 13:11:27.906403  401264 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0127 13:11:27.906418  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:11:27.906477  401264 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.423436818s)
	I0127 13:11:27.906554  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:11:27.906579  401264 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.346444527s)
	I0127 13:11:27.906628  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:11:27.906628  401264 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.34661665s)
	I0127 13:11:27.906686  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:11:27.906698  401264 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.714470308s)
	I0127 13:11:28.852851  401264 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0127 13:11:28.852877  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0127 13:11:28.853032  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0127 13:11:28.853150  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:11:28.853221  401264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:11:28.902876  401264 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 13:11:28.902881  401264 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0127 13:11:28.903005  401264 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 13:11:28.903015  401264 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:11:28.935213  401264 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0127 13:11:28.935343  401264 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 13:11:28.935442  401264 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 13:11:28.935529  401264 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 13:11:28.947544  401264 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0127 13:11:28.947618  401264 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:11:28.947684  401264 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:11:28.947571  401264 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0127 13:11:28.947571  401264 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0127 13:11:28.947582  401264 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0127 13:11:28.947590  401264 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0127 13:11:28.947781  401264 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 13:11:29.290149  401264 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 13:11:29.290196  401264 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 13:11:29.290246  401264 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0127 13:11:29.290257  401264 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0127 13:11:31.339145  401264 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.048867305s)
	I0127 13:11:31.339188  401264 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 13:11:31.339221  401264 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 13:11:31.339275  401264 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0127 13:11:32.080289  401264 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0127 13:11:32.080356  401264 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 13:11:32.080449  401264 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0127 13:11:32.528586  401264 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0127 13:11:32.528664  401264 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 13:11:32.528746  401264 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 13:11:32.675360  401264 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0127 13:11:32.675406  401264 cache_images.go:123] Successfully loaded all cached images
	I0127 13:11:32.675411  401264 cache_images.go:92] duration metric: took 8.723882977s to LoadCachedImages
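With the cached images transferred and podman-loaded, a quick way to confirm they are now visible to CRI-O before kubeadm needs them:

    $ sudo crictl images    # the images listed in LoadCachedImages above should all be present now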
	I0127 13:11:32.675426  401264 kubeadm.go:934] updating node { 192.168.39.80 8443 v1.24.4 crio true true} ...
	I0127 13:11:32.675610  401264 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-337839 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-337839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:11:32.675700  401264 ssh_runner.go:195] Run: crio config
	I0127 13:11:32.721383  401264 cni.go:84] Creating CNI manager for ""
	I0127 13:11:32.721406  401264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:11:32.721427  401264 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:11:32.721454  401264 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.80 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-337839 NodeName:test-preload-337839 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:11:32.721632  401264 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-337839"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:11:32.721723  401264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0127 13:11:32.732064  401264 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:11:32.732130  401264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:11:32.741693  401264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0127 13:11:32.757648  401264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:11:32.773436  401264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
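The 2103-byte payload written here is the multi-document kubeadm/kubelet/kube-proxy config printed above. This run only executes individual init phases later, but the same file can be sanity-checked by hand with a dry run (a sketch, not something the test does):

    $ sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run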
	I0127 13:11:32.790272  401264 ssh_runner.go:195] Run: grep 192.168.39.80	control-plane.minikube.internal$ /etc/hosts
	I0127 13:11:32.794396  401264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:11:32.806920  401264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:11:32.913099  401264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:11:32.929555  401264 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839 for IP: 192.168.39.80
	I0127 13:11:32.929579  401264 certs.go:194] generating shared ca certs ...
	I0127 13:11:32.929599  401264 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:11:32.929835  401264 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:11:32.929885  401264 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:11:32.929896  401264 certs.go:256] generating profile certs ...
	I0127 13:11:32.929990  401264 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/client.key
	I0127 13:11:32.930052  401264 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/apiserver.key.a4bf3a3f
	I0127 13:11:32.930107  401264 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/proxy-client.key
	I0127 13:11:32.930262  401264 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:11:32.930302  401264 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:11:32.930316  401264 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:11:32.930339  401264 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:11:32.930368  401264 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:11:32.930392  401264 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:11:32.930435  401264 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:11:32.931174  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:11:32.982306  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:11:33.024196  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:11:33.062493  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:11:33.102688  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 13:11:33.146741  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:11:33.176357  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:11:33.201333  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 13:11:33.226073  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:11:33.248948  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:11:33.271205  401264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:11:33.293028  401264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:11:33.308879  401264 ssh_runner.go:195] Run: openssl version
	I0127 13:11:33.315035  401264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:11:33.325505  401264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:11:33.329859  401264 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:11:33.329920  401264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:11:33.335415  401264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:11:33.346147  401264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:11:33.356844  401264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:11:33.361002  401264 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:11:33.361047  401264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:11:33.366385  401264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:11:33.377012  401264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:11:33.387562  401264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:11:33.391794  401264 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:11:33.391832  401264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:11:33.397296  401264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
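Each CA bundle copied under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0); that hash is what the openssl x509 -hash calls above compute, and b5213941 is the one used for minikubeCA.pem a few lines up. Reproducing the lookup:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0
    # ... -> /usr/share/ca-certificates/minikubeCA.pem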
	I0127 13:11:33.408139  401264 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:11:33.412646  401264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:11:33.418554  401264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:11:33.424231  401264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:11:33.429995  401264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:11:33.435526  401264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:11:33.441079  401264 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
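The -checkend 86400 probes ask each control-plane certificate whether it will still be valid 24 hours (86400 s) from now; exit status 0 means it will not expire in that window, so no regeneration is needed. For a single cert by hand:

    $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "ok for at least 24h" || echo "expires within 24h"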
	I0127 13:11:33.446623  401264 kubeadm.go:392] StartCluster: {Name:test-preload-337839 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-337839 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:11:33.446727  401264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:11:33.446783  401264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:11:33.484853  401264 cri.go:89] found id: ""
	I0127 13:11:33.484911  401264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:11:33.495011  401264 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:11:33.495029  401264 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:11:33.495069  401264 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:11:33.504665  401264 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:11:33.505145  401264 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-337839" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:11:33.505266  401264 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-337839" cluster setting kubeconfig missing "test-preload-337839" context setting]
	I0127 13:11:33.505588  401264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:11:33.506223  401264 kapi.go:59] client config for test-preload-337839: &rest.Config{Host:"https://192.168.39.80:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/client.key", CAFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 13:11:33.507003  401264 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:11:33.516213  401264 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.80
	I0127 13:11:33.516243  401264 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:11:33.516254  401264 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:11:33.516291  401264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:11:33.554620  401264 cri.go:89] found id: ""
	I0127 13:11:33.554675  401264 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:11:33.569747  401264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:11:33.579537  401264 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:11:33.579564  401264 kubeadm.go:157] found existing configuration files:
	
	I0127 13:11:33.579622  401264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:11:33.588723  401264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:11:33.588766  401264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:11:33.598047  401264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:11:33.606966  401264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:11:33.607005  401264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:11:33.616387  401264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:11:33.625430  401264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:11:33.625469  401264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:11:33.634532  401264 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:11:33.643418  401264 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:11:33.643456  401264 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:11:33.652612  401264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:11:33.661955  401264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:11:33.756116  401264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:11:34.898271  401264 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.142115179s)
	I0127 13:11:34.898305  401264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:11:35.144974  401264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:11:35.224555  401264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:11:35.302934  401264 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:11:35.303037  401264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:11:35.804117  401264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:11:36.303777  401264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:11:36.333394  401264 api_server.go:72] duration metric: took 1.030458778s to wait for apiserver process to appear ...
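After the init phases above (certs, kubeconfig, kubelet-start, control-plane, etcd local), the apiserver should show up both as a host process and as a CRI-O container; the pgrep loop above checks the former. The same check by hand:

    $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    $ sudo crictl ps --name kube-apiserver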
	I0127 13:11:36.333430  401264 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:11:36.333456  401264 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0127 13:11:36.334006  401264 api_server.go:269] stopped: https://192.168.39.80:8443/healthz: Get "https://192.168.39.80:8443/healthz": dial tcp 192.168.39.80:8443: connect: connection refused
	I0127 13:11:36.833652  401264 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0127 13:11:40.260597  401264 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:11:40.260629  401264 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:11:40.260650  401264 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0127 13:11:40.287137  401264 api_server.go:279] https://192.168.39.80:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:11:40.287171  401264 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:11:40.334417  401264 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0127 13:11:40.354330  401264 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:11:40.354367  401264 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
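The 403s at the start of this wait are expected: the probe is unauthenticated, and the apiserver only authorizes anonymous reads of /healthz once its RBAC bootstrap policy is in place, so the first polls are rejected outright; the 500 responses that follow then enumerate which post-start hooks are still pending until everything reports ok. Roughly the same view from a shell, assuming the node IP is reachable:

    $ curl -k 'https://192.168.39.80:8443/healthz?verbose'
    # 403 until anonymous /healthz access is authorized, then the [+]/[-] check breakdown seen above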
	I0127 13:11:40.834077  401264 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0127 13:11:40.839631  401264 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:11:40.839669  401264 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:11:41.334215  401264 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0127 13:11:41.343443  401264 api_server.go:279] https://192.168.39.80:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:11:41.343482  401264 api_server.go:103] status: https://192.168.39.80:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:11:41.834167  401264 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0127 13:11:41.842034  401264 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0127 13:11:41.849393  401264 api_server.go:141] control plane version: v1.24.4
	I0127 13:11:41.849421  401264 api_server.go:131] duration metric: took 5.515978049s to wait for apiserver health ...
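
The api_server.go lines above poll https://192.168.39.80:8443/healthz roughly every 500ms until it answers 200; while individual post-start hooks are still failing the endpoint returns 500, and the per-check output reads "failed: reason withheld" because the caller is not authorized to see the detailed reason. Below is a minimal sketch of that polling loop, not minikube's actual implementation: the URL is copied from the log, certificate verification is disabled only to keep the example self-contained, and the ?verbose=true parameter simply asks the apiserver to list each check even on success.

    // healthzpoll.go - hedged sketch: poll the apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Skip TLS verification only so the sketch runs without the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://192.168.39.80:8443/healthz?verbose=true"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between checks
    	}
    	fmt.Println("apiserver did not become healthy before the deadline")
    }
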
	I0127 13:11:41.849438  401264 cni.go:84] Creating CNI manager for ""
	I0127 13:11:41.849449  401264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:11:41.851435  401264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:11:41.852676  401264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:11:41.862987  401264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
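
The two lines above create /etc/cni/net.d and copy a 496-byte bridge CNI conflist into it. The log does not show the file's contents, so the sketch below writes a generic bridge-plus-portmap conflist of the same general shape; every field value here is illustrative, not a reproduction of minikube's 1-k8s.conflist.

    // writecni.go - hedged sketch: write a generic bridge CNI conflist to /etc/cni/net.d.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	dir := "/etc/cni/net.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors the "sudo mkdir -p /etc/cni/net.d" step above
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	path := filepath.Join(dir, "1-k8s.conflist")
    	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("wrote", path)
    }
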
	I0127 13:11:41.881360  401264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:11:41.881454  401264 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 13:11:41.881472  401264 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 13:11:41.890231  401264 system_pods.go:59] 7 kube-system pods found
	I0127 13:11:41.890260  401264 system_pods.go:61] "coredns-6d4b75cb6d-fjhjv" [7cf06703-e2ec-43bd-b40a-122a18d8ac74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:11:41.890266  401264 system_pods.go:61] "etcd-test-preload-337839" [efc22b5a-015c-4809-8da6-cd5c51b5913a] Running
	I0127 13:11:41.890274  401264 system_pods.go:61] "kube-apiserver-test-preload-337839" [0e438d3d-e81f-47e2-9635-3523e75297ba] Running
	I0127 13:11:41.890281  401264 system_pods.go:61] "kube-controller-manager-test-preload-337839" [200f7063-6675-4420-9fa2-1212f5eae99c] Running
	I0127 13:11:41.890288  401264 system_pods.go:61] "kube-proxy-xst5f" [a663f4a2-97a2-47d8-a72c-e03d3ced8d22] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:11:41.890298  401264 system_pods.go:61] "kube-scheduler-test-preload-337839" [85ad3f6c-70d4-4011-b95e-4c7c54377a58] Running
	I0127 13:11:41.890304  401264 system_pods.go:61] "storage-provisioner" [988e3aaa-e418-4ffc-bf99-37bca5a1ae34] Running
	I0127 13:11:41.890315  401264 system_pods.go:74] duration metric: took 8.931681ms to wait for pod list to return data ...
	I0127 13:11:41.890325  401264 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:11:41.902760  401264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:11:41.902785  401264 node_conditions.go:123] node cpu capacity is 2
	I0127 13:11:41.902812  401264 node_conditions.go:105] duration metric: took 12.480267ms to run NodePressure ...
	I0127 13:11:41.902844  401264 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:11:42.083415  401264 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 13:11:42.090768  401264 retry.go:31] will retry after 154.333601ms: kubelet not initialised
	I0127 13:11:42.251478  401264 retry.go:31] will retry after 367.299133ms: kubelet not initialised
	I0127 13:11:42.625629  401264 retry.go:31] will retry after 729.449722ms: kubelet not initialised
	I0127 13:11:43.363712  401264 retry.go:31] will retry after 796.099047ms: kubelet not initialised
	I0127 13:11:44.166832  401264 retry.go:31] will retry after 1.12439602s: kubelet not initialised
	I0127 13:11:45.297666  401264 retry.go:31] will retry after 2.409984907s: kubelet not initialised
	I0127 13:11:47.716405  401264 kubeadm.go:739] kubelet initialised
	I0127 13:11:47.716429  401264 kubeadm.go:740] duration metric: took 5.632988357s waiting for restarted kubelet to initialise ...
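
The retry.go lines above show the same check being repeated with delays that grow with jitter (154ms, 367ms, 729ms, ... 2.4s) until the restarted kubelet reports initialised. A stand-alone sketch of that pattern is below: a growing, jittered delay under an overall timeout. The condition function is a dummy stand-in, not minikube's kubelet check.

    // retrysketch.go - hedged sketch of the "will retry after ..." pattern seen above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry keeps calling check until it returns nil or the timeout elapses,
    // sleeping a little longer (with jitter) after each failure.
    func retry(timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	delay := 150 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2 // grow the base delay, roughly matching the log's progression
    	}
    }

    func main() {
    	start := time.Now()
    	err := retry(30*time.Second, func() error {
    		if time.Since(start) < 3*time.Second { // stand-in for "kubelet not initialised"
    			return errors.New("kubelet not initialised")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }
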
	I0127 13:11:47.716439  401264 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:11:47.722823  401264 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-fjhjv" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:47.727457  401264 pod_ready.go:98] node "test-preload-337839" hosting pod "coredns-6d4b75cb6d-fjhjv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:47.727476  401264 pod_ready.go:82] duration metric: took 4.63164ms for pod "coredns-6d4b75cb6d-fjhjv" in "kube-system" namespace to be "Ready" ...
	E0127 13:11:47.727485  401264 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-337839" hosting pod "coredns-6d4b75cb6d-fjhjv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:47.727491  401264 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:47.732080  401264 pod_ready.go:98] node "test-preload-337839" hosting pod "etcd-test-preload-337839" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:47.732098  401264 pod_ready.go:82] duration metric: took 4.597681ms for pod "etcd-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	E0127 13:11:47.732105  401264 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-337839" hosting pod "etcd-test-preload-337839" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:47.732110  401264 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:47.735985  401264 pod_ready.go:98] node "test-preload-337839" hosting pod "kube-apiserver-test-preload-337839" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:47.736002  401264 pod_ready.go:82] duration metric: took 3.877608ms for pod "kube-apiserver-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	E0127 13:11:47.736010  401264 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-337839" hosting pod "kube-apiserver-test-preload-337839" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:47.736015  401264 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:47.739981  401264 pod_ready.go:98] node "test-preload-337839" hosting pod "kube-controller-manager-test-preload-337839" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:47.740004  401264 pod_ready.go:82] duration metric: took 3.982102ms for pod "kube-controller-manager-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	E0127 13:11:47.740012  401264 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-337839" hosting pod "kube-controller-manager-test-preload-337839" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:47.740018  401264 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xst5f" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:48.114213  401264 pod_ready.go:98] node "test-preload-337839" hosting pod "kube-proxy-xst5f" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:48.114245  401264 pod_ready.go:82] duration metric: took 374.216024ms for pod "kube-proxy-xst5f" in "kube-system" namespace to be "Ready" ...
	E0127 13:11:48.114258  401264 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-337839" hosting pod "kube-proxy-xst5f" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:48.114265  401264 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:48.514901  401264 pod_ready.go:98] node "test-preload-337839" hosting pod "kube-scheduler-test-preload-337839" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:48.514938  401264 pod_ready.go:82] duration metric: took 400.664434ms for pod "kube-scheduler-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	E0127 13:11:48.514952  401264 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-337839" hosting pod "kube-scheduler-test-preload-337839" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-337839" has status "Ready":"False"
	I0127 13:11:48.514962  401264 pod_ready.go:39] duration metric: took 798.506368ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
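
Every pod in the block above is skipped with node "test-preload-337839" has status "Ready":"False": pod readiness is not waited on while the hosting node itself is not Ready. A hedged client-go sketch of that node-level check follows; the kubeconfig source and node name are placeholders taken from the log, and this is not minikube's pod_ready.go code.

    // nodeready.go - hedged sketch: report whether a node's NodeReady condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// KUBECONFIG is assumed to point at a valid kubeconfig for the cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := client.CoreV1().Nodes().Get(context.TODO(), "test-preload-337839", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			ready = cond.Status == corev1.ConditionTrue
    		}
    	}
    	fmt.Printf("node %s Ready=%v\n", node.Name, ready)
    }
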
	I0127 13:11:48.514987  401264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:11:48.528486  401264 ops.go:34] apiserver oom_adj: -16
	I0127 13:11:48.528510  401264 kubeadm.go:597] duration metric: took 15.033473534s to restartPrimaryControlPlane
	I0127 13:11:48.528521  401264 kubeadm.go:394] duration metric: took 15.081903553s to StartCluster
	I0127 13:11:48.528544  401264 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:11:48.528623  401264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:11:48.529491  401264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:11:48.529789  401264 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:11:48.529883  401264 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:11:48.529960  401264 config.go:182] Loaded profile config "test-preload-337839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0127 13:11:48.530001  401264 addons.go:69] Setting storage-provisioner=true in profile "test-preload-337839"
	I0127 13:11:48.530023  401264 addons.go:238] Setting addon storage-provisioner=true in "test-preload-337839"
	W0127 13:11:48.530038  401264 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:11:48.530040  401264 addons.go:69] Setting default-storageclass=true in profile "test-preload-337839"
	I0127 13:11:48.530069  401264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-337839"
	I0127 13:11:48.530085  401264 host.go:66] Checking if "test-preload-337839" exists ...
	I0127 13:11:48.530504  401264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:11:48.530532  401264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:11:48.530564  401264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:11:48.530609  401264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:11:48.531334  401264 out.go:177] * Verifying Kubernetes components...
	I0127 13:11:48.532785  401264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:11:48.546311  401264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0127 13:11:48.546500  401264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0127 13:11:48.546774  401264 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:11:48.546985  401264 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:11:48.547302  401264 main.go:141] libmachine: Using API Version  1
	I0127 13:11:48.547316  401264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:11:48.547457  401264 main.go:141] libmachine: Using API Version  1
	I0127 13:11:48.547480  401264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:11:48.547676  401264 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:11:48.547831  401264 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:11:48.547892  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetState
	I0127 13:11:48.548406  401264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:11:48.548459  401264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:11:48.550615  401264 kapi.go:59] client config for test-preload-337839: &rest.Config{Host:"https://192.168.39.80:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/profiles/test-preload-337839/client.key", CAFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
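
The kapi.go dump above is the rest.Config minikube builds for the cluster: the host https://192.168.39.80:8443 plus the profile's client certificate, key and CA. The sketch below builds an equivalent config by hand and asks the server for its version; the file paths are copied from the log and would differ on another machine, and this is not the code that produced the dump.

    // restconfig.go - hedged sketch: build a client-go rest.Config from the certificate
    // paths shown in the kapi.go line and query the control plane version.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	profile := "/home/jenkins/minikube-integration/20317-361578/.minikube"
    	cfg := &rest.Config{
    		Host: "https://192.168.39.80:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: profile + "/profiles/test-preload-337839/client.crt",
    			KeyFile:  profile + "/profiles/test-preload-337839/client.key",
    			CAFile:   profile + "/ca.crt",
    		},
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	v, err := client.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion) // the log reports v1.24.4
    }
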
	I0127 13:11:48.550988  401264 addons.go:238] Setting addon default-storageclass=true in "test-preload-337839"
	W0127 13:11:48.551011  401264 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:11:48.551042  401264 host.go:66] Checking if "test-preload-337839" exists ...
	I0127 13:11:48.551419  401264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:11:48.551465  401264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:11:48.564488  401264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0127 13:11:48.565000  401264 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:11:48.565556  401264 main.go:141] libmachine: Using API Version  1
	I0127 13:11:48.565582  401264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:11:48.565966  401264 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:11:48.566192  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetState
	I0127 13:11:48.566718  401264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0127 13:11:48.567349  401264 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:11:48.567914  401264 main.go:141] libmachine: Using API Version  1
	I0127 13:11:48.567938  401264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:11:48.568028  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:11:48.568330  401264 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:11:48.568934  401264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:11:48.568978  401264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:11:48.569902  401264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:11:48.571242  401264 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:11:48.571261  401264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:11:48.571276  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:48.573971  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:48.574362  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:48.574387  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:48.574563  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:48.574758  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:48.574899  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:48.575046  401264 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa Username:docker}
	I0127 13:11:48.602533  401264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0127 13:11:48.602973  401264 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:11:48.603466  401264 main.go:141] libmachine: Using API Version  1
	I0127 13:11:48.603490  401264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:11:48.603887  401264 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:11:48.604100  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetState
	I0127 13:11:48.605928  401264 main.go:141] libmachine: (test-preload-337839) Calling .DriverName
	I0127 13:11:48.606153  401264 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:11:48.606169  401264 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:11:48.606185  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHHostname
	I0127 13:11:48.609122  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:48.609577  401264 main.go:141] libmachine: (test-preload-337839) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:5a:cd", ip: ""} in network mk-test-preload-337839: {Iface:virbr1 ExpiryTime:2025-01-27 14:08:55 +0000 UTC Type:0 Mac:52:54:00:d9:5a:cd Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:test-preload-337839 Clientid:01:52:54:00:d9:5a:cd}
	I0127 13:11:48.609598  401264 main.go:141] libmachine: (test-preload-337839) DBG | domain test-preload-337839 has defined IP address 192.168.39.80 and MAC address 52:54:00:d9:5a:cd in network mk-test-preload-337839
	I0127 13:11:48.609791  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHPort
	I0127 13:11:48.609983  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHKeyPath
	I0127 13:11:48.610138  401264 main.go:141] libmachine: (test-preload-337839) Calling .GetSSHUsername
	I0127 13:11:48.610282  401264 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/test-preload-337839/id_rsa Username:docker}
	I0127 13:11:48.728087  401264 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:11:48.744726  401264 node_ready.go:35] waiting up to 6m0s for node "test-preload-337839" to be "Ready" ...
	I0127 13:11:48.834678  401264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:11:48.843989  401264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:11:49.863161  401264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.019136906s)
	I0127 13:11:49.863230  401264 main.go:141] libmachine: Making call to close driver server
	I0127 13:11:49.863243  401264 main.go:141] libmachine: (test-preload-337839) Calling .Close
	I0127 13:11:49.863436  401264 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.028720094s)
	I0127 13:11:49.863487  401264 main.go:141] libmachine: Making call to close driver server
	I0127 13:11:49.863506  401264 main.go:141] libmachine: (test-preload-337839) Calling .Close
	I0127 13:11:49.863538  401264 main.go:141] libmachine: (test-preload-337839) DBG | Closing plugin on server side
	I0127 13:11:49.863609  401264 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:11:49.863631  401264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:11:49.863645  401264 main.go:141] libmachine: Making call to close driver server
	I0127 13:11:49.863654  401264 main.go:141] libmachine: (test-preload-337839) Calling .Close
	I0127 13:11:49.863732  401264 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:11:49.863745  401264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:11:49.863759  401264 main.go:141] libmachine: Making call to close driver server
	I0127 13:11:49.863770  401264 main.go:141] libmachine: (test-preload-337839) Calling .Close
	I0127 13:11:49.863863  401264 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:11:49.863903  401264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:11:49.863946  401264 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:11:49.863965  401264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:11:49.863977  401264 main.go:141] libmachine: (test-preload-337839) DBG | Closing plugin on server side
	I0127 13:11:49.863898  401264 main.go:141] libmachine: (test-preload-337839) DBG | Closing plugin on server side
	I0127 13:11:49.870161  401264 main.go:141] libmachine: Making call to close driver server
	I0127 13:11:49.870176  401264 main.go:141] libmachine: (test-preload-337839) Calling .Close
	I0127 13:11:49.870382  401264 main.go:141] libmachine: (test-preload-337839) DBG | Closing plugin on server side
	I0127 13:11:49.870427  401264 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:11:49.870444  401264 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:11:49.871964  401264 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 13:11:49.873222  401264 addons.go:514] duration metric: took 1.343354299s for enable addons: enabled=[storage-provisioner default-storageclass]
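
The addon steps above scp the manifests into /etc/kubernetes/addons/ on the VM and then apply them with the cluster's own kubectl binary over SSH. Below is a rough local-equivalent sketch using os/exec; the binary, manifest and kubeconfig paths are copied from the log and act as placeholders here, and this is not a faithful reproduction of ssh_runner.

    // applyaddons.go - hedged sketch: apply addon manifests with kubectl, roughly
    // mirroring the commands in the log (run locally instead of over SSH).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    	}
    	for _, m := range manifests {
    		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.4/kubectl", "apply", "-f", m)
    		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "apply %s failed: %v\n", m, err)
    			os.Exit(1)
    		}
    	}
    }
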
	I0127 13:11:50.748625  401264 node_ready.go:49] node "test-preload-337839" has status "Ready":"True"
	I0127 13:11:50.748652  401264 node_ready.go:38] duration metric: took 2.00389693s for node "test-preload-337839" to be "Ready" ...
	I0127 13:11:50.748663  401264 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:11:50.753487  401264 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-fjhjv" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:50.759922  401264 pod_ready.go:93] pod "coredns-6d4b75cb6d-fjhjv" in "kube-system" namespace has status "Ready":"True"
	I0127 13:11:50.759941  401264 pod_ready.go:82] duration metric: took 6.432152ms for pod "coredns-6d4b75cb6d-fjhjv" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:50.759949  401264 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:50.765061  401264 pod_ready.go:93] pod "etcd-test-preload-337839" in "kube-system" namespace has status "Ready":"True"
	I0127 13:11:50.765080  401264 pod_ready.go:82] duration metric: took 5.12512ms for pod "etcd-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:50.765087  401264 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:50.913088  401264 pod_ready.go:93] pod "kube-apiserver-test-preload-337839" in "kube-system" namespace has status "Ready":"True"
	I0127 13:11:50.913117  401264 pod_ready.go:82] duration metric: took 148.022102ms for pod "kube-apiserver-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:50.913132  401264 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:51.312771  401264 pod_ready.go:93] pod "kube-controller-manager-test-preload-337839" in "kube-system" namespace has status "Ready":"True"
	I0127 13:11:51.312797  401264 pod_ready.go:82] duration metric: took 399.657803ms for pod "kube-controller-manager-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:51.312808  401264 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xst5f" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:51.715025  401264 pod_ready.go:93] pod "kube-proxy-xst5f" in "kube-system" namespace has status "Ready":"True"
	I0127 13:11:51.715049  401264 pod_ready.go:82] duration metric: took 402.234408ms for pod "kube-proxy-xst5f" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:51.715059  401264 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:52.113329  401264 pod_ready.go:93] pod "kube-scheduler-test-preload-337839" in "kube-system" namespace has status "Ready":"True"
	I0127 13:11:52.113353  401264 pod_ready.go:82] duration metric: took 398.287135ms for pod "kube-scheduler-test-preload-337839" in "kube-system" namespace to be "Ready" ...
	I0127 13:11:52.113365  401264 pod_ready.go:39] duration metric: took 1.364689394s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
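
Once the node is Ready, the pod_ready.go lines above confirm each system pod's Ready condition in turn. A hedged sketch of waiting for a single pod to become Ready with client-go and apimachinery's wait helpers is below; the namespace, pod name and kubeconfig source are placeholders lifted from the log, and minikube's own waiter differs in detail.

    // podready.go - hedged sketch: poll until a pod's Ready condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ns, name := "kube-system", "coredns-6d4b75cb6d-fjhjv" // names taken from the log, placeholders here
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("wait result:", err) // nil once the pod reports Ready
    }
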
	I0127 13:11:52.113384  401264 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:11:52.113462  401264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:11:52.127938  401264 api_server.go:72] duration metric: took 3.598106002s to wait for apiserver process to appear ...
	I0127 13:11:52.127962  401264 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:11:52.127979  401264 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I0127 13:11:52.133294  401264 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I0127 13:11:52.134563  401264 api_server.go:141] control plane version: v1.24.4
	I0127 13:11:52.134582  401264 api_server.go:131] duration metric: took 6.614207ms to wait for apiserver health ...
	I0127 13:11:52.134592  401264 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:11:52.316638  401264 system_pods.go:59] 7 kube-system pods found
	I0127 13:11:52.316664  401264 system_pods.go:61] "coredns-6d4b75cb6d-fjhjv" [7cf06703-e2ec-43bd-b40a-122a18d8ac74] Running
	I0127 13:11:52.316669  401264 system_pods.go:61] "etcd-test-preload-337839" [efc22b5a-015c-4809-8da6-cd5c51b5913a] Running
	I0127 13:11:52.316673  401264 system_pods.go:61] "kube-apiserver-test-preload-337839" [0e438d3d-e81f-47e2-9635-3523e75297ba] Running
	I0127 13:11:52.316677  401264 system_pods.go:61] "kube-controller-manager-test-preload-337839" [200f7063-6675-4420-9fa2-1212f5eae99c] Running
	I0127 13:11:52.316680  401264 system_pods.go:61] "kube-proxy-xst5f" [a663f4a2-97a2-47d8-a72c-e03d3ced8d22] Running
	I0127 13:11:52.316683  401264 system_pods.go:61] "kube-scheduler-test-preload-337839" [85ad3f6c-70d4-4011-b95e-4c7c54377a58] Running
	I0127 13:11:52.316686  401264 system_pods.go:61] "storage-provisioner" [988e3aaa-e418-4ffc-bf99-37bca5a1ae34] Running
	I0127 13:11:52.316693  401264 system_pods.go:74] duration metric: took 182.094184ms to wait for pod list to return data ...
	I0127 13:11:52.316700  401264 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:11:52.518961  401264 default_sa.go:45] found service account: "default"
	I0127 13:11:52.518986  401264 default_sa.go:55] duration metric: took 202.278227ms for default service account to be created ...
	I0127 13:11:52.518996  401264 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:11:52.716386  401264 system_pods.go:87] 7 kube-system pods found
	I0127 13:11:52.915997  401264 system_pods.go:105] "coredns-6d4b75cb6d-fjhjv" [7cf06703-e2ec-43bd-b40a-122a18d8ac74] Running
	I0127 13:11:52.916035  401264 system_pods.go:105] "etcd-test-preload-337839" [efc22b5a-015c-4809-8da6-cd5c51b5913a] Running
	I0127 13:11:52.916044  401264 system_pods.go:105] "kube-apiserver-test-preload-337839" [0e438d3d-e81f-47e2-9635-3523e75297ba] Running
	I0127 13:11:52.916050  401264 system_pods.go:105] "kube-controller-manager-test-preload-337839" [200f7063-6675-4420-9fa2-1212f5eae99c] Running
	I0127 13:11:52.916056  401264 system_pods.go:105] "kube-proxy-xst5f" [a663f4a2-97a2-47d8-a72c-e03d3ced8d22] Running
	I0127 13:11:52.916062  401264 system_pods.go:105] "kube-scheduler-test-preload-337839" [85ad3f6c-70d4-4011-b95e-4c7c54377a58] Running
	I0127 13:11:52.916069  401264 system_pods.go:105] "storage-provisioner" [988e3aaa-e418-4ffc-bf99-37bca5a1ae34] Running
	I0127 13:11:52.916081  401264 system_pods.go:147] duration metric: took 397.077377ms to wait for k8s-apps to be running ...
	I0127 13:11:52.916095  401264 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 13:11:52.916152  401264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:11:52.930561  401264 system_svc.go:56] duration metric: took 14.455562ms WaitForService to wait for kubelet
	I0127 13:11:52.930591  401264 kubeadm.go:582] duration metric: took 4.400763187s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:11:52.930608  401264 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:11:53.112738  401264 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:11:53.112761  401264 node_conditions.go:123] node cpu capacity is 2
	I0127 13:11:53.112775  401264 node_conditions.go:105] duration metric: took 182.161499ms to run NodePressure ...
	I0127 13:11:53.112789  401264 start.go:241] waiting for startup goroutines ...
	I0127 13:11:53.112799  401264 start.go:246] waiting for cluster config update ...
	I0127 13:11:53.112814  401264 start.go:255] writing updated cluster config ...
	I0127 13:11:53.113117  401264 ssh_runner.go:195] Run: rm -f paused
	I0127 13:11:53.160874  401264 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0127 13:11:53.162814  401264 out.go:201] 
	W0127 13:11:53.163992  401264 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0127 13:11:53.165048  401264 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0127 13:11:53.166112  401264 out.go:177] * Done! kubectl is now configured to use "test-preload-337839" cluster and "default" namespace by default
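
The warning a few lines up compares the host kubectl (1.32.1) with the cluster (1.24.4) and reports a minor-version skew of 8, which is why the output suggests using minikube's bundled kubectl instead. A small sketch of that skew arithmetic, with the version strings hard-coded from the log (minikube's real check lives elsewhere and handles malformed versions more carefully):

    // skew.go - hedged sketch: compute the minor-version skew reported above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component from a version like "1.32.1" or "v1.24.4".
    // It assumes a well-formed "major.minor.patch" string.
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	n, _ := strconv.Atoi(parts[1])
    	return n
    }

    func main() {
    	kubectl, cluster := "1.32.1", "1.24.4" // values from the log above
    	skew := minor(kubectl) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("minor skew: %d\n", skew) // prints 8, matching the log
    }
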
	
	
	==> CRI-O <==
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.064996243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983514064974529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=837f8df2-8604-4cbe-ad6a-62e8c6334f2d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.067936880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=542486a7-e422-40c9-8e15-02fca414b7f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.067984122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=542486a7-e422-40c9-8e15-02fca414b7f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.068233681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5893420bef0e9537121d27415c1c61ab82a1e9155595baff8d2b88aa75456aa0,PodSandboxId:a1b83e54412d5aef6b067a00ed416067131b4fefff05a6c80c7a240e9b801391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737983508428701782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fjhjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf06703-e2ec-43bd-b40a-122a18d8ac74,},Annotations:map[string]string{io.kubernetes.container.hash: 585490e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5a543dc1310644db29a32a35082356b196909c6665f69a5e6fa90c59857893,PodSandboxId:5f0ec6649dc573fc40e18e823f5b129f21fc011832f64fa3ef07b9defc6eba5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737983501021469598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xst5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a663f4a2-97a2-47d8-a72c-e03d3ced8d22,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb4f22d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:005d8a880dea598f54a31ee1a0a203ef9a0028e2e9f4e2cd6d3090bebcf96f2c,PodSandboxId:d330161d1509697e5162b7b3b4565a6b10df3bbe5b104db1ee862e79a95ab81c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737983501045518665,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
8e3aaa-e418-4ffc-bf99-37bca5a1ae34,},Annotations:map[string]string{io.kubernetes.container.hash: 9f847727,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c688ed418f017e8ebeaeef03bbc3f2858a0b7f6196be6a78e2b81c4640e6ea,PodSandboxId:a572d378ec86f9b8626b93b1a9226c05fbdf2d998cf65f716400e4316626966e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737983496014408498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 958d4b43872b1818f40d9b55ab2dbc8f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ce3d4cb1ce8b91c5e9f1690643422be2f117ebe5d03458f160461a15976994,PodSandboxId:7c8217389009f9256805fe85ae505b61ac15c6acb1fb50e2cd8f34b3b2409eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737983496080557912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d05bbeea2cc4a9e9449310
8146083a5,},Annotations:map[string]string{io.kubernetes.container.hash: 567333bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a1dd249516e3bdb2fe98a398f87a9e56f2adb647e6b666db1037f4dccea03b,PodSandboxId:b1c58baa82cba9c51a408d4a0d7d0064f035e030e73d5a61a4a72f88ee084ce8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737983496035329951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47e386aa45e0441347379653949262a2,}
,Annotations:map[string]string{io.kubernetes.container.hash: e40c747b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:694d7a6722d4c99f648877a7de88f2b8483b65249fa166bc3aa4c12cf65f5e67,PodSandboxId:95841ad912d5072ed724cf72ba16914fdc1f4df00542cc13ae400f9dace221e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737983495979424025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3d1bd34925379d7721e60b87adc27b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=542486a7-e422-40c9-8e15-02fca414b7f7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.102928161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42384cba-ec49-450d-9742-81e65734d250 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.102990969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42384cba-ec49-450d-9742-81e65734d250 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.104479434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30959f30-fb31-4011-9eb2-12e8d7c8f7f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.105116980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983514105095584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30959f30-fb31-4011-9eb2-12e8d7c8f7f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.105738956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d00cc93-b540-4450-b11e-f3e554f5dab3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.105785423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d00cc93-b540-4450-b11e-f3e554f5dab3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.105972247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5893420bef0e9537121d27415c1c61ab82a1e9155595baff8d2b88aa75456aa0,PodSandboxId:a1b83e54412d5aef6b067a00ed416067131b4fefff05a6c80c7a240e9b801391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737983508428701782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fjhjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf06703-e2ec-43bd-b40a-122a18d8ac74,},Annotations:map[string]string{io.kubernetes.container.hash: 585490e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5a543dc1310644db29a32a35082356b196909c6665f69a5e6fa90c59857893,PodSandboxId:5f0ec6649dc573fc40e18e823f5b129f21fc011832f64fa3ef07b9defc6eba5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737983501021469598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xst5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a663f4a2-97a2-47d8-a72c-e03d3ced8d22,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb4f22d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:005d8a880dea598f54a31ee1a0a203ef9a0028e2e9f4e2cd6d3090bebcf96f2c,PodSandboxId:d330161d1509697e5162b7b3b4565a6b10df3bbe5b104db1ee862e79a95ab81c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737983501045518665,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
8e3aaa-e418-4ffc-bf99-37bca5a1ae34,},Annotations:map[string]string{io.kubernetes.container.hash: 9f847727,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c688ed418f017e8ebeaeef03bbc3f2858a0b7f6196be6a78e2b81c4640e6ea,PodSandboxId:a572d378ec86f9b8626b93b1a9226c05fbdf2d998cf65f716400e4316626966e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737983496014408498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 958d4b43872b1818f40d9b55ab2dbc8f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ce3d4cb1ce8b91c5e9f1690643422be2f117ebe5d03458f160461a15976994,PodSandboxId:7c8217389009f9256805fe85ae505b61ac15c6acb1fb50e2cd8f34b3b2409eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737983496080557912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d05bbeea2cc4a9e9449310
8146083a5,},Annotations:map[string]string{io.kubernetes.container.hash: 567333bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a1dd249516e3bdb2fe98a398f87a9e56f2adb647e6b666db1037f4dccea03b,PodSandboxId:b1c58baa82cba9c51a408d4a0d7d0064f035e030e73d5a61a4a72f88ee084ce8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737983496035329951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47e386aa45e0441347379653949262a2,}
,Annotations:map[string]string{io.kubernetes.container.hash: e40c747b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:694d7a6722d4c99f648877a7de88f2b8483b65249fa166bc3aa4c12cf65f5e67,PodSandboxId:95841ad912d5072ed724cf72ba16914fdc1f4df00542cc13ae400f9dace221e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737983495979424025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3d1bd34925379d7721e60b87adc27b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d00cc93-b540-4450-b11e-f3e554f5dab3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.140081357Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=661f7423-5519-405c-be95-d3f9ea848f54 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.140148546Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=661f7423-5519-405c-be95-d3f9ea848f54 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.141510135Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11f0b6a5-bb94-4c84-928f-8ba78d290ba7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.142013678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983514141991089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11f0b6a5-bb94-4c84-928f-8ba78d290ba7 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.142529913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a65e0ccc-6669-4ccb-be6b-5d8a153bb3d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.142578632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a65e0ccc-6669-4ccb-be6b-5d8a153bb3d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.142779119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5893420bef0e9537121d27415c1c61ab82a1e9155595baff8d2b88aa75456aa0,PodSandboxId:a1b83e54412d5aef6b067a00ed416067131b4fefff05a6c80c7a240e9b801391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737983508428701782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fjhjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf06703-e2ec-43bd-b40a-122a18d8ac74,},Annotations:map[string]string{io.kubernetes.container.hash: 585490e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5a543dc1310644db29a32a35082356b196909c6665f69a5e6fa90c59857893,PodSandboxId:5f0ec6649dc573fc40e18e823f5b129f21fc011832f64fa3ef07b9defc6eba5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737983501021469598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xst5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a663f4a2-97a2-47d8-a72c-e03d3ced8d22,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb4f22d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:005d8a880dea598f54a31ee1a0a203ef9a0028e2e9f4e2cd6d3090bebcf96f2c,PodSandboxId:d330161d1509697e5162b7b3b4565a6b10df3bbe5b104db1ee862e79a95ab81c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737983501045518665,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
8e3aaa-e418-4ffc-bf99-37bca5a1ae34,},Annotations:map[string]string{io.kubernetes.container.hash: 9f847727,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c688ed418f017e8ebeaeef03bbc3f2858a0b7f6196be6a78e2b81c4640e6ea,PodSandboxId:a572d378ec86f9b8626b93b1a9226c05fbdf2d998cf65f716400e4316626966e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737983496014408498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 958d4b43872b1818f40d9b55ab2dbc8f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ce3d4cb1ce8b91c5e9f1690643422be2f117ebe5d03458f160461a15976994,PodSandboxId:7c8217389009f9256805fe85ae505b61ac15c6acb1fb50e2cd8f34b3b2409eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737983496080557912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d05bbeea2cc4a9e9449310
8146083a5,},Annotations:map[string]string{io.kubernetes.container.hash: 567333bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a1dd249516e3bdb2fe98a398f87a9e56f2adb647e6b666db1037f4dccea03b,PodSandboxId:b1c58baa82cba9c51a408d4a0d7d0064f035e030e73d5a61a4a72f88ee084ce8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737983496035329951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47e386aa45e0441347379653949262a2,}
,Annotations:map[string]string{io.kubernetes.container.hash: e40c747b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:694d7a6722d4c99f648877a7de88f2b8483b65249fa166bc3aa4c12cf65f5e67,PodSandboxId:95841ad912d5072ed724cf72ba16914fdc1f4df00542cc13ae400f9dace221e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737983495979424025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3d1bd34925379d7721e60b87adc27b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a65e0ccc-6669-4ccb-be6b-5d8a153bb3d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.175972791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f769d812-d832-4f7e-862f-9e07e99becac name=/runtime.v1.RuntimeService/Version
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.176045156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f769d812-d832-4f7e-862f-9e07e99becac name=/runtime.v1.RuntimeService/Version
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.177073706Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b48c9a1-cdc7-4d28-b848-6964b1c06c58 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.177483715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983514177462944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b48c9a1-cdc7-4d28-b848-6964b1c06c58 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.178067394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a1a7a88-cf90-4a84-9be5-7d9736abcaa4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.178112583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a1a7a88-cf90-4a84-9be5-7d9736abcaa4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:11:54 test-preload-337839 crio[676]: time="2025-01-27 13:11:54.178294969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5893420bef0e9537121d27415c1c61ab82a1e9155595baff8d2b88aa75456aa0,PodSandboxId:a1b83e54412d5aef6b067a00ed416067131b4fefff05a6c80c7a240e9b801391,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737983508428701782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fjhjv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf06703-e2ec-43bd-b40a-122a18d8ac74,},Annotations:map[string]string{io.kubernetes.container.hash: 585490e2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5a543dc1310644db29a32a35082356b196909c6665f69a5e6fa90c59857893,PodSandboxId:5f0ec6649dc573fc40e18e823f5b129f21fc011832f64fa3ef07b9defc6eba5b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737983501021469598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xst5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a663f4a2-97a2-47d8-a72c-e03d3ced8d22,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb4f22d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:005d8a880dea598f54a31ee1a0a203ef9a0028e2e9f4e2cd6d3090bebcf96f2c,PodSandboxId:d330161d1509697e5162b7b3b4565a6b10df3bbe5b104db1ee862e79a95ab81c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737983501045518665,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98
8e3aaa-e418-4ffc-bf99-37bca5a1ae34,},Annotations:map[string]string{io.kubernetes.container.hash: 9f847727,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16c688ed418f017e8ebeaeef03bbc3f2858a0b7f6196be6a78e2b81c4640e6ea,PodSandboxId:a572d378ec86f9b8626b93b1a9226c05fbdf2d998cf65f716400e4316626966e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737983496014408498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 958d4b43872b1818f40d9b55ab2dbc8f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75ce3d4cb1ce8b91c5e9f1690643422be2f117ebe5d03458f160461a15976994,PodSandboxId:7c8217389009f9256805fe85ae505b61ac15c6acb1fb50e2cd8f34b3b2409eb2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737983496080557912,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d05bbeea2cc4a9e9449310
8146083a5,},Annotations:map[string]string{io.kubernetes.container.hash: 567333bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03a1dd249516e3bdb2fe98a398f87a9e56f2adb647e6b666db1037f4dccea03b,PodSandboxId:b1c58baa82cba9c51a408d4a0d7d0064f035e030e73d5a61a4a72f88ee084ce8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737983496035329951,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47e386aa45e0441347379653949262a2,}
,Annotations:map[string]string{io.kubernetes.container.hash: e40c747b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:694d7a6722d4c99f648877a7de88f2b8483b65249fa166bc3aa4c12cf65f5e67,PodSandboxId:95841ad912d5072ed724cf72ba16914fdc1f4df00542cc13ae400f9dace221e7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737983495979424025,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-337839,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a3d1bd34925379d7721e60b87adc27b,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a1a7a88-cf90-4a84-9be5-7d9736abcaa4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5893420bef0e9       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   a1b83e54412d5       coredns-6d4b75cb6d-fjhjv
	005d8a880dea5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   d330161d15096       storage-provisioner
	dd5a543dc1310       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   5f0ec6649dc57       kube-proxy-xst5f
	75ce3d4cb1ce8       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   7c8217389009f       etcd-test-preload-337839
	03a1dd249516e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   b1c58baa82cba       kube-apiserver-test-preload-337839
	16c688ed418f0       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   a572d378ec86f       kube-controller-manager-test-preload-337839
	694d7a6722d4c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   95841ad912d50       kube-scheduler-test-preload-337839
	
	
	==> coredns [5893420bef0e9537121d27415c1c61ab82a1e9155595baff8d2b88aa75456aa0] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:42334 - 20190 "HINFO IN 1323147486428682295.5870336457930667820. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01197633s
	
	
	==> describe nodes <==
	Name:               test-preload-337839
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-337839
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=test-preload-337839
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_10_08_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:10:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-337839
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:11:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:11:50 +0000   Mon, 27 Jan 2025 13:10:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:11:50 +0000   Mon, 27 Jan 2025 13:10:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:11:50 +0000   Mon, 27 Jan 2025 13:10:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:11:50 +0000   Mon, 27 Jan 2025 13:11:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    test-preload-337839
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 241fb70e27ea4409a8e70f9eeaf68638
	  System UUID:                241fb70e-27ea-4409-a8e7-0f9eeaf68638
	  Boot ID:                    2223bb6e-19b0-469b-9b08-38deaded0754
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fjhjv                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-test-preload-337839                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         108s
	  kube-system                 kube-apiserver-test-preload-337839             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-test-preload-337839    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-xst5f                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-test-preload-337839             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 91s                  kube-proxy       
	  Normal  Starting                 12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  113s (x6 over 114s)  kubelet          Node test-preload-337839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s (x5 over 114s)  kubelet          Node test-preload-337839 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    113s (x5 over 114s)  kubelet          Node test-preload-337839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s                 kubelet          Node test-preload-337839 status is now: NodeHasSufficientPID
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  106s                 kubelet          Node test-preload-337839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s                 kubelet          Node test-preload-337839 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                96s                  kubelet          Node test-preload-337839 status is now: NodeReady
	  Normal  RegisteredNode           94s                  node-controller  Node test-preload-337839 event: Registered Node test-preload-337839 in Controller
	  Normal  Starting                 19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)    kubelet          Node test-preload-337839 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)    kubelet          Node test-preload-337839 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)    kubelet          Node test-preload-337839 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                   node-controller  Node test-preload-337839 event: Registered Node test-preload-337839 in Controller
	
	
	==> dmesg <==
	[Jan27 13:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052260] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041016] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.916755] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.794104] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.602926] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.223200] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.055726] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056589] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.150687] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.120787] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.255855] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[ +13.165480] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +0.059290] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.163275] systemd-fstab-generator[1127]: Ignoring "noauto" option for root device
	[  +5.097612] kauditd_printk_skb: 105 callbacks suppressed
	[  +8.061810] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.373624] systemd-fstab-generator[1937]: Ignoring "noauto" option for root device
	
	
	==> etcd [75ce3d4cb1ce8b91c5e9f1690643422be2f117ebe5d03458f160461a15976994] <==
	{"level":"info","ts":"2025-01-27T13:11:36.510Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d33e7f1dba1e46ae","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T13:11:36.511Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T13:11:36.512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae switched to configuration voters=(15221743556212180654)"}
	{"level":"info","ts":"2025-01-27T13:11:36.517Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6a6fd39da75dc67","local-member-id":"d33e7f1dba1e46ae","added-peer-id":"d33e7f1dba1e46ae","added-peer-peer-urls":["https://192.168.39.80:2380"]}
	{"level":"info","ts":"2025-01-27T13:11:36.518Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e6a6fd39da75dc67","local-member-id":"d33e7f1dba1e46ae","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:11:36.520Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:11:36.520Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T13:11:36.520Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2025-01-27T13:11:36.539Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.80:2380"}
	{"level":"info","ts":"2025-01-27T13:11:36.539Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d33e7f1dba1e46ae","initial-advertise-peer-urls":["https://192.168.39.80:2380"],"listen-peer-urls":["https://192.168.39.80:2380"],"advertise-client-urls":["https://192.168.39.80:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.80:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T13:11:36.540Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T13:11:37.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T13:11:37.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T13:11:37.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae received MsgPreVoteResp from d33e7f1dba1e46ae at term 2"}
	{"level":"info","ts":"2025-01-27T13:11:37.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T13:11:37.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae received MsgVoteResp from d33e7f1dba1e46ae at term 3"}
	{"level":"info","ts":"2025-01-27T13:11:37.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d33e7f1dba1e46ae became leader at term 3"}
	{"level":"info","ts":"2025-01-27T13:11:37.942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d33e7f1dba1e46ae elected leader d33e7f1dba1e46ae at term 3"}
	{"level":"info","ts":"2025-01-27T13:11:37.943Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d33e7f1dba1e46ae","local-member-attributes":"{Name:test-preload-337839 ClientURLs:[https://192.168.39.80:2379]}","request-path":"/0/members/d33e7f1dba1e46ae/attributes","cluster-id":"e6a6fd39da75dc67","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T13:11:37.943Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:11:37.944Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:11:37.945Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T13:11:37.946Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.80:2379"}
	{"level":"info","ts":"2025-01-27T13:11:37.946Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T13:11:37.946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:11:54 up 0 min,  0 users,  load average: 1.88, 0.50, 0.17
	Linux test-preload-337839 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [03a1dd249516e3bdb2fe98a398f87a9e56f2adb647e6b666db1037f4dccea03b] <==
	I0127 13:11:40.233805       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0127 13:11:40.233869       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 13:11:40.233941       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0127 13:11:40.241570       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0127 13:11:40.241597       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0127 13:11:40.248179       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 13:11:40.264257       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 13:11:40.317515       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0127 13:11:40.337988       1 cache.go:39] Caches are synced for autoregister controller
	I0127 13:11:40.338155       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0127 13:11:40.342259       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0127 13:11:40.357879       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0127 13:11:40.403456       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 13:11:40.411744       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0127 13:11:40.414166       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 13:11:40.874358       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0127 13:11:41.219941       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 13:11:41.452043       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0127 13:11:41.999782       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0127 13:11:42.012418       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0127 13:11:42.045419       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0127 13:11:42.061162       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 13:11:42.066287       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 13:11:52.630555       1 controller.go:611] quota admission added evaluator for: endpoints
	I0127 13:11:52.838985       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [16c688ed418f017e8ebeaeef03bbc3f2858a0b7f6196be6a78e2b81c4640e6ea] <==
	I0127 13:11:52.766585       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 13:11:52.802377       1 shared_informer.go:262] Caches are synced for resource quota
	I0127 13:11:52.808289       1 shared_informer.go:262] Caches are synced for disruption
	I0127 13:11:52.808354       1 disruption.go:371] Sending events to api server.
	W0127 13:11:52.809663       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-337839" does not exist
	I0127 13:11:52.811909       1 shared_informer.go:262] Caches are synced for daemon sets
	I0127 13:11:52.815405       1 shared_informer.go:262] Caches are synced for taint
	I0127 13:11:52.815672       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0127 13:11:52.815778       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0127 13:11:52.815988       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-337839. Assuming now as a timestamp.
	I0127 13:11:52.816268       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0127 13:11:52.816440       1 event.go:294] "Event occurred" object="test-preload-337839" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-337839 event: Registered Node test-preload-337839 in Controller"
	I0127 13:11:52.821109       1 shared_informer.go:262] Caches are synced for stateful set
	I0127 13:11:52.831444       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0127 13:11:52.833217       1 shared_informer.go:262] Caches are synced for GC
	I0127 13:11:52.841895       1 shared_informer.go:262] Caches are synced for TTL
	I0127 13:11:52.850376       1 shared_informer.go:262] Caches are synced for attach detach
	I0127 13:11:52.874188       1 shared_informer.go:262] Caches are synced for node
	I0127 13:11:52.874333       1 range_allocator.go:173] Starting range CIDR allocator
	I0127 13:11:52.874428       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0127 13:11:52.874502       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0127 13:11:52.885825       1 shared_informer.go:262] Caches are synced for persistent volume
	I0127 13:11:53.305522       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 13:11:53.348262       1 shared_informer.go:262] Caches are synced for garbage collector
	I0127 13:11:53.348349       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [dd5a543dc1310644db29a32a35082356b196909c6665f69a5e6fa90c59857893] <==
	I0127 13:11:41.369575       1 node.go:163] Successfully retrieved node IP: 192.168.39.80
	I0127 13:11:41.369725       1 server_others.go:138] "Detected node IP" address="192.168.39.80"
	I0127 13:11:41.369781       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0127 13:11:41.440254       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0127 13:11:41.440347       1 server_others.go:206] "Using iptables Proxier"
	I0127 13:11:41.441336       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0127 13:11:41.442099       1 server.go:661] "Version info" version="v1.24.4"
	I0127 13:11:41.442569       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:11:41.445600       1 config.go:317] "Starting service config controller"
	I0127 13:11:41.447931       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0127 13:11:41.445874       1 config.go:444] "Starting node config controller"
	I0127 13:11:41.447689       1 config.go:226] "Starting endpoint slice config controller"
	I0127 13:11:41.449043       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0127 13:11:41.449009       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0127 13:11:41.548698       1 shared_informer.go:262] Caches are synced for service config
	I0127 13:11:41.549815       1 shared_informer.go:262] Caches are synced for node config
	I0127 13:11:41.549833       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [694d7a6722d4c99f648877a7de88f2b8483b65249fa166bc3aa4c12cf65f5e67] <==
	I0127 13:11:36.910995       1 serving.go:348] Generated self-signed cert in-memory
	W0127 13:11:40.275478       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 13:11:40.275571       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 13:11:40.275599       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 13:11:40.275709       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 13:11:40.321127       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0127 13:11:40.321231       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:11:40.339662       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 13:11:40.339737       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 13:11:40.341508       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0127 13:11:40.346437       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0127 13:11:40.440702       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.305747    1134 topology_manager.go:200] "Topology Admit Handler"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: E0127 13:11:40.308071    1134 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fjhjv" podUID=7cf06703-e2ec-43bd-b40a-122a18d8ac74
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359013    1134 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a663f4a2-97a2-47d8-a72c-e03d3ced8d22-lib-modules\") pod \"kube-proxy-xst5f\" (UID: \"a663f4a2-97a2-47d8-a72c-e03d3ced8d22\") " pod="kube-system/kube-proxy-xst5f"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359070    1134 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/988e3aaa-e418-4ffc-bf99-37bca5a1ae34-tmp\") pod \"storage-provisioner\" (UID: \"988e3aaa-e418-4ffc-bf99-37bca5a1ae34\") " pod="kube-system/storage-provisioner"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359091    1134 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a663f4a2-97a2-47d8-a72c-e03d3ced8d22-kube-proxy\") pod \"kube-proxy-xst5f\" (UID: \"a663f4a2-97a2-47d8-a72c-e03d3ced8d22\") " pod="kube-system/kube-proxy-xst5f"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359107    1134 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a663f4a2-97a2-47d8-a72c-e03d3ced8d22-xtables-lock\") pod \"kube-proxy-xst5f\" (UID: \"a663f4a2-97a2-47d8-a72c-e03d3ced8d22\") " pod="kube-system/kube-proxy-xst5f"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359175    1134 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cllh8\" (UniqueName: \"kubernetes.io/projected/a663f4a2-97a2-47d8-a72c-e03d3ced8d22-kube-api-access-cllh8\") pod \"kube-proxy-xst5f\" (UID: \"a663f4a2-97a2-47d8-a72c-e03d3ced8d22\") " pod="kube-system/kube-proxy-xst5f"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359222    1134 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume\") pod \"coredns-6d4b75cb6d-fjhjv\" (UID: \"7cf06703-e2ec-43bd-b40a-122a18d8ac74\") " pod="kube-system/coredns-6d4b75cb6d-fjhjv"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359265    1134 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvcm\" (UniqueName: \"kubernetes.io/projected/7cf06703-e2ec-43bd-b40a-122a18d8ac74-kube-api-access-chvcm\") pod \"coredns-6d4b75cb6d-fjhjv\" (UID: \"7cf06703-e2ec-43bd-b40a-122a18d8ac74\") " pod="kube-system/coredns-6d4b75cb6d-fjhjv"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359300    1134 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbdp\" (UniqueName: \"kubernetes.io/projected/988e3aaa-e418-4ffc-bf99-37bca5a1ae34-kube-api-access-snbdp\") pod \"storage-provisioner\" (UID: \"988e3aaa-e418-4ffc-bf99-37bca5a1ae34\") " pod="kube-system/storage-provisioner"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.359382    1134 reconciler.go:159] "Reconciler: start to sync state"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.376520    1134 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-337839"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.376591    1134 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-337839"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: E0127 13:11:40.381788    1134 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: I0127 13:11:40.387074    1134 setters.go:532] "Node became not ready" node="test-preload-337839" condition={Type:Ready Status:False LastHeartbeatTime:2025-01-27 13:11:40.387026136 +0000 UTC m=+5.248603884 LastTransitionTime:2025-01-27 13:11:40.387026136 +0000 UTC m=+5.248603884 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: E0127 13:11:40.463953    1134 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: E0127 13:11:40.464060    1134 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume podName:7cf06703-e2ec-43bd-b40a-122a18d8ac74 nodeName:}" failed. No retries permitted until 2025-01-27 13:11:40.964030617 +0000 UTC m=+5.825608354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume") pod "coredns-6d4b75cb6d-fjhjv" (UID: "7cf06703-e2ec-43bd-b40a-122a18d8ac74") : object "kube-system"/"coredns" not registered
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: E0127 13:11:40.967390    1134 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 13:11:40 test-preload-337839 kubelet[1134]: E0127 13:11:40.967477    1134 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume podName:7cf06703-e2ec-43bd-b40a-122a18d8ac74 nodeName:}" failed. No retries permitted until 2025-01-27 13:11:41.967461639 +0000 UTC m=+6.829039387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume") pod "coredns-6d4b75cb6d-fjhjv" (UID: "7cf06703-e2ec-43bd-b40a-122a18d8ac74") : object "kube-system"/"coredns" not registered
	Jan 27 13:11:41 test-preload-337839 kubelet[1134]: E0127 13:11:41.976407    1134 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 13:11:41 test-preload-337839 kubelet[1134]: E0127 13:11:41.976498    1134 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume podName:7cf06703-e2ec-43bd-b40a-122a18d8ac74 nodeName:}" failed. No retries permitted until 2025-01-27 13:11:43.976484979 +0000 UTC m=+8.838062716 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume") pod "coredns-6d4b75cb6d-fjhjv" (UID: "7cf06703-e2ec-43bd-b40a-122a18d8ac74") : object "kube-system"/"coredns" not registered
	Jan 27 13:11:42 test-preload-337839 kubelet[1134]: E0127 13:11:42.388550    1134 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fjhjv" podUID=7cf06703-e2ec-43bd-b40a-122a18d8ac74
	Jan 27 13:11:43 test-preload-337839 kubelet[1134]: E0127 13:11:43.989270    1134 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 27 13:11:43 test-preload-337839 kubelet[1134]: E0127 13:11:43.989364    1134 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume podName:7cf06703-e2ec-43bd-b40a-122a18d8ac74 nodeName:}" failed. No retries permitted until 2025-01-27 13:11:47.989348186 +0000 UTC m=+12.850925924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7cf06703-e2ec-43bd-b40a-122a18d8ac74-config-volume") pod "coredns-6d4b75cb6d-fjhjv" (UID: "7cf06703-e2ec-43bd-b40a-122a18d8ac74") : object "kube-system"/"coredns" not registered
	Jan 27 13:11:44 test-preload-337839 kubelet[1134]: E0127 13:11:44.389049    1134 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fjhjv" podUID=7cf06703-e2ec-43bd-b40a-122a18d8ac74
	
	
	==> storage-provisioner [005d8a880dea598f54a31ee1a0a203ef9a0028e2e9f4e2cd6d3090bebcf96f2c] <==
	I0127 13:11:41.148312       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-337839 -n test-preload-337839
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-337839 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-337839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-337839
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-337839: (1.167051969s)
--- FAIL: TestPreload (196.58s)

                                                
                                    
TestKubernetesUpgrade (409s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m56.420089516s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-511736] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-511736" primary control-plane node in "kubernetes-upgrade-511736" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0127 13:13:51.810568  402825 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:13:51.810686  402825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:51.810697  402825 out.go:358] Setting ErrFile to fd 2...
	I0127 13:13:51.810702  402825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:13:51.810925  402825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:13:51.811544  402825 out.go:352] Setting JSON to false
	I0127 13:13:51.812471  402825 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":21372,"bootTime":1737962260,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:13:51.812535  402825 start.go:139] virtualization: kvm guest
	I0127 13:13:51.816775  402825 out.go:177] * [kubernetes-upgrade-511736] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:13:51.818576  402825 notify.go:220] Checking for updates...
	I0127 13:13:51.819669  402825 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:13:51.821981  402825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:13:51.824349  402825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:13:51.825738  402825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:13:51.827579  402825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:13:51.829996  402825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:13:51.831307  402825 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:13:51.870637  402825 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 13:13:51.871893  402825 start.go:297] selected driver: kvm2
	I0127 13:13:51.871907  402825 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:13:51.871926  402825 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:13:51.872892  402825 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:13:51.888933  402825 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:13:51.905528  402825 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:13:51.905586  402825 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:13:51.905846  402825 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 13:13:51.905878  402825 cni.go:84] Creating CNI manager for ""
	I0127 13:13:51.905933  402825 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:13:51.905944  402825 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 13:13:51.905999  402825 start.go:340] cluster config:
	{Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:13:51.906098  402825 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:13:51.907686  402825 out.go:177] * Starting "kubernetes-upgrade-511736" primary control-plane node in "kubernetes-upgrade-511736" cluster
	I0127 13:13:51.908955  402825 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:13:51.908993  402825 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 13:13:51.909019  402825 cache.go:56] Caching tarball of preloaded images
	I0127 13:13:51.909103  402825 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:13:51.909118  402825 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 13:13:51.909440  402825 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/config.json ...
	I0127 13:13:51.909469  402825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/config.json: {Name:mk6d9fd16d16940f345202d15478fc8c792ad26d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:13:51.909615  402825 start.go:360] acquireMachinesLock for kubernetes-upgrade-511736: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:14:16.663373  402825 start.go:364] duration metric: took 24.753706132s to acquireMachinesLock for "kubernetes-upgrade-511736"
	I0127 13:14:16.663445  402825 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:14:16.663576  402825 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 13:14:16.665556  402825 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 13:14:16.665782  402825 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:14:16.665859  402825 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:14:16.684574  402825 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41525
	I0127 13:14:16.684982  402825 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:14:16.685543  402825 main.go:141] libmachine: Using API Version  1
	I0127 13:14:16.685571  402825 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:14:16.685913  402825 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:14:16.686168  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetMachineName
	I0127 13:14:16.686319  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:14:16.686574  402825 start.go:159] libmachine.API.Create for "kubernetes-upgrade-511736" (driver="kvm2")
	I0127 13:14:16.686610  402825 client.go:168] LocalClient.Create starting
	I0127 13:14:16.686644  402825 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem
	I0127 13:14:16.686684  402825 main.go:141] libmachine: Decoding PEM data...
	I0127 13:14:16.686703  402825 main.go:141] libmachine: Parsing certificate...
	I0127 13:14:16.686778  402825 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem
	I0127 13:14:16.686826  402825 main.go:141] libmachine: Decoding PEM data...
	I0127 13:14:16.686844  402825 main.go:141] libmachine: Parsing certificate...
	I0127 13:14:16.686866  402825 main.go:141] libmachine: Running pre-create checks...
	I0127 13:14:16.686876  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .PreCreateCheck
	I0127 13:14:16.687235  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetConfigRaw
	I0127 13:14:16.687663  402825 main.go:141] libmachine: Creating machine...
	I0127 13:14:16.687677  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .Create
	I0127 13:14:16.687828  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) creating KVM machine...
	I0127 13:14:16.687864  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) creating network...
	I0127 13:14:16.689074  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found existing default KVM network
	I0127 13:14:16.689863  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:16.689727  403179 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:0c:e1} reservation:<nil>}
	I0127 13:14:16.690471  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:16.690390  403179 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026c2a0}
	I0127 13:14:16.690495  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | created network xml: 
	I0127 13:14:16.690507  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | <network>
	I0127 13:14:16.690519  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |   <name>mk-kubernetes-upgrade-511736</name>
	I0127 13:14:16.690550  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |   <dns enable='no'/>
	I0127 13:14:16.690563  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |   
	I0127 13:14:16.690575  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0127 13:14:16.690594  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |     <dhcp>
	I0127 13:14:16.690609  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0127 13:14:16.690619  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |     </dhcp>
	I0127 13:14:16.690628  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |   </ip>
	I0127 13:14:16.690634  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG |   
	I0127 13:14:16.690645  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | </network>
	I0127 13:14:16.690655  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | 
	I0127 13:14:16.695617  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | trying to create private KVM network mk-kubernetes-upgrade-511736 192.168.50.0/24...
	I0127 13:14:16.762702  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | private KVM network mk-kubernetes-upgrade-511736 192.168.50.0/24 created
	I0127 13:14:16.762811  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) setting up store path in /home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736 ...
	I0127 13:14:16.762846  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) building disk image from file:///home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 13:14:16.762909  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:16.762859  403179 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:14:16.763106  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Downloading /home/jenkins/minikube-integration/20317-361578/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 13:14:17.045774  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:17.045671  403179 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa...
	I0127 13:14:17.091558  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:17.091426  403179 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/kubernetes-upgrade-511736.rawdisk...
	I0127 13:14:17.091589  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | Writing magic tar header
	I0127 13:14:17.091604  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | Writing SSH key tar header
	I0127 13:14:17.091616  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:17.091554  403179 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736 ...
	I0127 13:14:17.091705  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736
	I0127 13:14:17.091733  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube/machines
	I0127 13:14:17.091822  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736 (perms=drwx------)
	I0127 13:14:17.091859  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube/machines (perms=drwxr-xr-x)
	I0127 13:14:17.091872  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:14:17.091897  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578
	I0127 13:14:17.091910  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 13:14:17.091925  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | checking permissions on dir: /home/jenkins
	I0127 13:14:17.091936  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | checking permissions on dir: /home
	I0127 13:14:17.091949  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube (perms=drwxr-xr-x)
	I0127 13:14:17.091972  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) setting executable bit set on /home/jenkins/minikube-integration/20317-361578 (perms=drwxrwxr-x)
	I0127 13:14:17.091988  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 13:14:17.092000  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | skipping /home - not owner
	I0127 13:14:17.092016  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 13:14:17.092025  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) creating domain...
	I0127 13:14:17.092992  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) define libvirt domain using xml: 
	I0127 13:14:17.093013  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) <domain type='kvm'>
	I0127 13:14:17.093024  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   <name>kubernetes-upgrade-511736</name>
	I0127 13:14:17.093033  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   <memory unit='MiB'>2200</memory>
	I0127 13:14:17.093045  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   <vcpu>2</vcpu>
	I0127 13:14:17.093054  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   <features>
	I0127 13:14:17.093063  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <acpi/>
	I0127 13:14:17.093079  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <apic/>
	I0127 13:14:17.093090  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <pae/>
	I0127 13:14:17.093096  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     
	I0127 13:14:17.093175  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   </features>
	I0127 13:14:17.093216  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   <cpu mode='host-passthrough'>
	I0127 13:14:17.093226  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   
	I0127 13:14:17.093240  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   </cpu>
	I0127 13:14:17.093255  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   <os>
	I0127 13:14:17.093265  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <type>hvm</type>
	I0127 13:14:17.093274  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <boot dev='cdrom'/>
	I0127 13:14:17.093284  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <boot dev='hd'/>
	I0127 13:14:17.093292  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <bootmenu enable='no'/>
	I0127 13:14:17.093302  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   </os>
	I0127 13:14:17.093311  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   <devices>
	I0127 13:14:17.093336  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <disk type='file' device='cdrom'>
	I0127 13:14:17.093362  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <source file='/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/boot2docker.iso'/>
	I0127 13:14:17.093376  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <target dev='hdc' bus='scsi'/>
	I0127 13:14:17.093385  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <readonly/>
	I0127 13:14:17.093394  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     </disk>
	I0127 13:14:17.093415  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <disk type='file' device='disk'>
	I0127 13:14:17.093440  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 13:14:17.093455  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <source file='/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/kubernetes-upgrade-511736.rawdisk'/>
	I0127 13:14:17.093460  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <target dev='hda' bus='virtio'/>
	I0127 13:14:17.093465  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     </disk>
	I0127 13:14:17.093470  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <interface type='network'>
	I0127 13:14:17.093475  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <source network='mk-kubernetes-upgrade-511736'/>
	I0127 13:14:17.093483  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <model type='virtio'/>
	I0127 13:14:17.093488  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     </interface>
	I0127 13:14:17.093497  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <interface type='network'>
	I0127 13:14:17.093539  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <source network='default'/>
	I0127 13:14:17.093560  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <model type='virtio'/>
	I0127 13:14:17.093570  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     </interface>
	I0127 13:14:17.093581  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <serial type='pty'>
	I0127 13:14:17.093590  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <target port='0'/>
	I0127 13:14:17.093600  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     </serial>
	I0127 13:14:17.093612  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <console type='pty'>
	I0127 13:14:17.093622  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <target type='serial' port='0'/>
	I0127 13:14:17.093630  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     </console>
	I0127 13:14:17.093640  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     <rng model='virtio'>
	I0127 13:14:17.093648  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)       <backend model='random'>/dev/random</backend>
	I0127 13:14:17.093663  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     </rng>
	I0127 13:14:17.093674  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     
	I0127 13:14:17.093682  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)     
	I0127 13:14:17.093689  402825 main.go:141] libmachine: (kubernetes-upgrade-511736)   </devices>
	I0127 13:14:17.093699  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) </domain>
	I0127 13:14:17.093709  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) 
	I0127 13:14:17.098184  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:10:90:41 in network default
	I0127 13:14:17.098891  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) starting domain...
	I0127 13:14:17.098920  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:17.098929  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) ensuring networks are active...
	I0127 13:14:17.099588  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Ensuring network default is active
	I0127 13:14:17.099878  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Ensuring network mk-kubernetes-upgrade-511736 is active
	I0127 13:14:17.100393  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) getting domain XML...
	I0127 13:14:17.101211  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) creating domain...
	I0127 13:14:18.432261  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) waiting for IP...
	I0127 13:14:18.433222  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:18.433689  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:18.433802  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:18.433700  403179 retry.go:31] will retry after 244.851949ms: waiting for domain to come up
	I0127 13:14:18.680319  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:18.680928  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:18.681030  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:18.680932  403179 retry.go:31] will retry after 316.635315ms: waiting for domain to come up
	I0127 13:14:18.999739  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:19.000276  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:19.000310  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:19.000214  403179 retry.go:31] will retry after 379.446694ms: waiting for domain to come up
	I0127 13:14:19.380899  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:19.381381  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:19.381413  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:19.381353  403179 retry.go:31] will retry after 446.879ms: waiting for domain to come up
	I0127 13:14:19.830226  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:19.830693  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:19.830734  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:19.830681  403179 retry.go:31] will retry after 674.366948ms: waiting for domain to come up
	I0127 13:14:20.506179  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:20.506666  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:20.506700  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:20.506626  403179 retry.go:31] will retry after 635.766984ms: waiting for domain to come up
	I0127 13:14:21.144206  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:21.144722  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:21.144769  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:21.144681  403179 retry.go:31] will retry after 868.433294ms: waiting for domain to come up
	I0127 13:14:22.015348  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:22.015754  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:22.015817  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:22.015742  403179 retry.go:31] will retry after 1.012397416s: waiting for domain to come up
	I0127 13:14:23.029641  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:23.030147  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:23.030177  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:23.030093  403179 retry.go:31] will retry after 1.335007517s: waiting for domain to come up
	I0127 13:14:24.366764  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:24.367203  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:24.367245  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:24.367119  403179 retry.go:31] will retry after 1.859453893s: waiting for domain to come up
	I0127 13:14:26.229240  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:26.229718  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:26.229747  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:26.229664  403179 retry.go:31] will retry after 2.207485995s: waiting for domain to come up
	I0127 13:14:28.439672  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:28.440213  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:28.440246  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:28.440171  403179 retry.go:31] will retry after 3.450166122s: waiting for domain to come up
	I0127 13:14:31.892072  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:31.892578  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:31.892609  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:31.892552  403179 retry.go:31] will retry after 2.954286466s: waiting for domain to come up
	I0127 13:14:34.850584  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:34.851008  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find current IP address of domain kubernetes-upgrade-511736 in network mk-kubernetes-upgrade-511736
	I0127 13:14:34.851034  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | I0127 13:14:34.850984  403179 retry.go:31] will retry after 4.342568912s: waiting for domain to come up
	I0127 13:14:39.195676  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.196129  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) found domain IP: 192.168.50.10
	I0127 13:14:39.196157  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) reserving static IP address...
	I0127 13:14:39.196184  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has current primary IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.196563  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-511736", mac: "52:54:00:c4:f9:6f", ip: "192.168.50.10"} in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.272916  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) reserved static IP address 192.168.50.10 for domain kubernetes-upgrade-511736
	I0127 13:14:39.272951  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | Getting to WaitForSSH function...
	I0127 13:14:39.272978  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) waiting for SSH...
	I0127 13:14:39.275918  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.276455  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:39.276495  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.276614  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | Using SSH client type: external
	I0127 13:14:39.276653  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa (-rw-------)
	I0127 13:14:39.276692  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.10 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:14:39.276706  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | About to run SSH command:
	I0127 13:14:39.276718  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | exit 0
	I0127 13:14:39.411250  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | SSH cmd err, output: <nil>: 
	I0127 13:14:39.411490  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) KVM machine creation complete
	I0127 13:14:39.411889  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetConfigRaw
	I0127 13:14:39.412525  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:14:39.412740  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:14:39.412911  402825 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 13:14:39.412928  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetState
	I0127 13:14:39.414681  402825 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 13:14:39.414701  402825 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 13:14:39.414708  402825 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 13:14:39.414717  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:39.417559  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.418015  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:39.418048  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.418238  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:39.418438  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:39.418636  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:39.418802  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:39.419001  402825 main.go:141] libmachine: Using SSH client type: native
	I0127 13:14:39.419306  402825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 13:14:39.419321  402825 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 13:14:39.530710  402825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:14:39.530739  402825 main.go:141] libmachine: Detecting the provisioner...
	I0127 13:14:39.530750  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:39.533988  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.534599  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:39.534641  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.534843  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:39.535092  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:39.535286  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:39.535488  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:39.535697  402825 main.go:141] libmachine: Using SSH client type: native
	I0127 13:14:39.535878  402825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 13:14:39.535888  402825 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 13:14:39.656548  402825 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 13:14:39.656648  402825 main.go:141] libmachine: found compatible host: buildroot
	I0127 13:14:39.656663  402825 main.go:141] libmachine: Provisioning with buildroot...
	I0127 13:14:39.656676  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetMachineName
	I0127 13:14:39.656950  402825 buildroot.go:166] provisioning hostname "kubernetes-upgrade-511736"
	I0127 13:14:39.656989  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetMachineName
	I0127 13:14:39.657199  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:39.660209  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.660591  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:39.660618  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.660824  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:39.661030  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:39.661234  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:39.661379  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:39.661593  402825 main.go:141] libmachine: Using SSH client type: native
	I0127 13:14:39.661831  402825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 13:14:39.661853  402825 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-511736 && echo "kubernetes-upgrade-511736" | sudo tee /etc/hostname
	I0127 13:14:39.794662  402825 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-511736
	
	I0127 13:14:39.794699  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:39.798070  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.798517  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:39.798587  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.798753  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:39.798957  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:39.799136  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:39.799295  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:39.799498  402825 main.go:141] libmachine: Using SSH client type: native
	I0127 13:14:39.799701  402825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 13:14:39.799725  402825 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-511736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-511736/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-511736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:14:39.929551  402825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:14:39.929587  402825 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:14:39.929612  402825 buildroot.go:174] setting up certificates
	I0127 13:14:39.929629  402825 provision.go:84] configureAuth start
	I0127 13:14:39.929643  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetMachineName
	I0127 13:14:39.929918  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetIP
	I0127 13:14:39.932480  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.932862  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:39.932885  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.933049  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:39.935037  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.935405  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:39.935437  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:39.935589  402825 provision.go:143] copyHostCerts
	I0127 13:14:39.935653  402825 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:14:39.935672  402825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:14:39.935724  402825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:14:39.935824  402825 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:14:39.935832  402825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:14:39.935852  402825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:14:39.935916  402825 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:14:39.935932  402825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:14:39.935953  402825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:14:39.936014  402825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-511736 san=[127.0.0.1 192.168.50.10 kubernetes-upgrade-511736 localhost minikube]
	I0127 13:14:40.081678  402825 provision.go:177] copyRemoteCerts
	I0127 13:14:40.081737  402825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:14:40.081763  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:40.084604  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.085034  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.085068  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.085197  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:40.085364  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:40.085543  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:40.085670  402825 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa Username:docker}
	I0127 13:14:40.168595  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:14:40.193936  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:14:40.217832  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 13:14:40.244403  402825 provision.go:87] duration metric: took 314.760893ms to configureAuth
	I0127 13:14:40.244433  402825 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:14:40.244641  402825 config.go:182] Loaded profile config "kubernetes-upgrade-511736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 13:14:40.244745  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:40.247367  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.247669  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.247695  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.247858  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:40.248047  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:40.248191  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:40.248335  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:40.248453  402825 main.go:141] libmachine: Using SSH client type: native
	I0127 13:14:40.248621  402825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 13:14:40.248634  402825 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:14:40.478168  402825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:14:40.478204  402825 main.go:141] libmachine: Checking connection to Docker...
	I0127 13:14:40.478217  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetURL
	I0127 13:14:40.479467  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | using libvirt version 6000000
	I0127 13:14:40.482829  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.483233  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.483271  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.483413  402825 main.go:141] libmachine: Docker is up and running!
	I0127 13:14:40.483448  402825 main.go:141] libmachine: Reticulating splines...
	I0127 13:14:40.483457  402825 client.go:171] duration metric: took 23.796836226s to LocalClient.Create
	I0127 13:14:40.483489  402825 start.go:167] duration metric: took 23.796918199s to libmachine.API.Create "kubernetes-upgrade-511736"
	I0127 13:14:40.483502  402825 start.go:293] postStartSetup for "kubernetes-upgrade-511736" (driver="kvm2")
	I0127 13:14:40.483517  402825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:14:40.483535  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:14:40.483779  402825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:14:40.483804  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:40.485977  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.486304  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.486347  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.486453  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:40.486667  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:40.486839  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:40.486992  402825 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa Username:docker}
	I0127 13:14:40.572667  402825 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:14:40.576959  402825 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:14:40.576978  402825 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:14:40.577054  402825 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:14:40.577149  402825 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:14:40.577265  402825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:14:40.586134  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:14:40.610559  402825 start.go:296] duration metric: took 127.039952ms for postStartSetup
	I0127 13:14:40.610608  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetConfigRaw
	I0127 13:14:40.611224  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetIP
	I0127 13:14:40.613798  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.614146  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.614189  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.614461  402825 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/config.json ...
	I0127 13:14:40.614708  402825 start.go:128] duration metric: took 23.951117831s to createHost
	I0127 13:14:40.614739  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:40.616847  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.617115  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.617143  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.617276  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:40.617458  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:40.617654  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:40.617807  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:40.617968  402825 main.go:141] libmachine: Using SSH client type: native
	I0127 13:14:40.618140  402825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 13:14:40.618151  402825 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:14:40.727033  402825 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737983680.696301563
	
	I0127 13:14:40.727053  402825 fix.go:216] guest clock: 1737983680.696301563
	I0127 13:14:40.727060  402825 fix.go:229] Guest: 2025-01-27 13:14:40.696301563 +0000 UTC Remote: 2025-01-27 13:14:40.614724947 +0000 UTC m=+48.849860588 (delta=81.576616ms)
	I0127 13:14:40.727083  402825 fix.go:200] guest clock delta is within tolerance: 81.576616ms
	I0127 13:14:40.727090  402825 start.go:83] releasing machines lock for "kubernetes-upgrade-511736", held for 24.063679757s
	I0127 13:14:40.727120  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:14:40.727397  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetIP
	I0127 13:14:40.730448  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.730824  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.730861  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.731039  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:14:40.731525  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:14:40.731719  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:14:40.731843  402825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:14:40.731889  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:40.732150  402825 ssh_runner.go:195] Run: cat /version.json
	I0127 13:14:40.732175  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:14:40.734866  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.735198  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.735236  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.735260  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.735384  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:40.735553  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:40.735676  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:40.735693  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:40.735732  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:40.735840  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:14:40.735858  402825 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa Username:docker}
	I0127 13:14:40.735977  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:14:40.736129  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:14:40.736336  402825 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa Username:docker}
	I0127 13:14:40.823695  402825 ssh_runner.go:195] Run: systemctl --version
	I0127 13:14:40.846746  402825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:14:41.006148  402825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:14:41.012981  402825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:14:41.013059  402825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:14:41.029282  402825 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
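	The find/mv step above renames any bridge or podman CNI configs with a .mk_disabled suffix so they cannot conflict with the CNI that minikube installs; here it disabled 87-podman-bridge.conflist. A minimal sketch of inspecting (or manually undoing) that rename on the guest, not something the test itself does:
	
	  ls /etc/cni/net.d/
	  # to restore a disabled config by hand:
	  # sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist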
	I0127 13:14:41.029318  402825 start.go:495] detecting cgroup driver to use...
	I0127 13:14:41.029386  402825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:14:41.045491  402825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:14:41.065302  402825 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:14:41.065363  402825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:14:41.080557  402825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:14:41.093760  402825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:14:41.241792  402825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:14:41.402773  402825 docker.go:233] disabling docker service ...
	I0127 13:14:41.402844  402825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:14:41.418670  402825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:14:41.436063  402825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:14:41.556512  402825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:14:41.675940  402825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:14:41.690720  402825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:14:41.710651  402825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 13:14:41.710726  402825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:14:41.721717  402825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:14:41.721781  402825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:14:41.732482  402825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:14:41.742710  402825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
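	For reference, the CRI-O settings applied in the preceding steps can be re-checked by hand on the guest; a minimal sketch (all expected values are taken from the tee/sed commands in the log above, and the check is a manual step, not part of the test run):
	
	  # environment file written for the CRI-O service earlier in the log
	  cat /etc/sysconfig/crio.minikube      # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  # crictl endpoint configured by minikube
	  cat /etc/crictl.yaml                  # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
	  # values rewritten by the sed commands above
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # expected after the edits:
	  #   pause_image = "registry.k8s.io/pause:3.2"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"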
	I0127 13:14:41.752837  402825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:14:41.764079  402825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:14:41.773499  402825 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:14:41.773553  402825 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:14:41.786795  402825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
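	The two steps above load the br_netfilter module (after the initial sysctl probe failed with "No such file or directory") and enable IPv4 forwarding. A minimal sketch of re-checking the result from a guest shell, purely diagnostic:
	
	  # module must be loaded before the bridge sysctls exist
	  lsmod | grep br_netfilter
	  # the sysctl that previously failed should now resolve instead of erroring
	  sudo sysctl net.bridge.bridge-nf-call-iptables
	  # forwarding was enabled explicitly in the log
	  cat /proc/sys/net/ipv4/ip_forward     # expect: 1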
	I0127 13:14:41.796442  402825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:14:41.913407  402825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:14:42.362817  402825 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:14:42.362909  402825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:14:42.371353  402825 start.go:563] Will wait 60s for crictl version
	I0127 13:14:42.371426  402825 ssh_runner.go:195] Run: which crictl
	I0127 13:14:42.376185  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:14:42.422679  402825 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:14:42.422775  402825 ssh_runner.go:195] Run: crio --version
	I0127 13:14:42.455192  402825 ssh_runner.go:195] Run: crio --version
	I0127 13:14:42.514161  402825 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 13:14:42.515515  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetIP
	I0127 13:14:42.519020  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:42.519495  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:14:42.519526  402825 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:14:42.519774  402825 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 13:14:42.525766  402825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:14:42.539926  402825 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:14:42.540057  402825 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:14:42.540111  402825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:14:42.576231  402825 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 13:14:42.576304  402825 ssh_runner.go:195] Run: which lz4
	I0127 13:14:42.580640  402825 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:14:42.585128  402825 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:14:42.585161  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 13:14:44.313835  402825 crio.go:462] duration metric: took 1.733227135s to copy over tarball
	I0127 13:14:44.313926  402825 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:14:46.948673  402825 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634706177s)
	I0127 13:14:46.948711  402825 crio.go:469] duration metric: took 2.634837309s to extract the tarball
	I0127 13:14:46.948723  402825 ssh_runner.go:146] rm: /preloaded.tar.lz4
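	After the preload tarball (preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4, 473237281 bytes) has been copied, extracted into /var and removed, the images it carries should be visible to CRI-O. A minimal sketch of checking that by hand; the test does the equivalent with "sudo crictl images --output json" on the next line:
	
	  # list what the runtime can see after extraction
	  sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|etcd|coredns'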
	I0127 13:14:46.992999  402825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:14:47.036978  402825 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 13:14:47.037008  402825 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 13:14:47.037084  402825 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:14:47.037106  402825 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:14:47.037160  402825 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 13:14:47.037167  402825 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:14:47.037205  402825 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 13:14:47.037424  402825 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:14:47.037537  402825 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:14:47.037644  402825 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:14:47.038765  402825 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:14:47.038765  402825 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:14:47.039195  402825 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:14:47.039247  402825 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:14:47.039270  402825 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 13:14:47.039391  402825 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:14:47.039419  402825 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:14:47.039500  402825 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 13:14:47.216962  402825 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:14:47.219093  402825 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 13:14:47.224615  402825 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:14:47.225183  402825 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:14:47.232465  402825 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:14:47.237360  402825 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 13:14:47.274719  402825 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 13:14:47.311999  402825 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 13:14:47.312068  402825 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:14:47.312132  402825 ssh_runner.go:195] Run: which crictl
	I0127 13:14:47.367575  402825 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 13:14:47.367624  402825 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 13:14:47.367686  402825 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:14:47.367632  402825 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:14:47.367764  402825 ssh_runner.go:195] Run: which crictl
	I0127 13:14:47.367801  402825 ssh_runner.go:195] Run: which crictl
	I0127 13:14:47.391384  402825 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 13:14:47.391447  402825 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:14:47.391500  402825 ssh_runner.go:195] Run: which crictl
	I0127 13:14:47.414283  402825 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 13:14:47.414343  402825 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:14:47.414407  402825 ssh_runner.go:195] Run: which crictl
	I0127 13:14:47.419901  402825 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 13:14:47.419939  402825 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 13:14:47.419981  402825 ssh_runner.go:195] Run: which crictl
	I0127 13:14:47.419990  402825 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 13:14:47.420025  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:14:47.420039  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:14:47.420032  402825 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 13:14:47.420119  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:14:47.420128  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:14:47.420125  402825 ssh_runner.go:195] Run: which crictl
	I0127 13:14:47.420098  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:14:47.538591  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:14:47.538617  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:14:47.538651  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:14:47.538722  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:14:47.538767  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:14:47.538808  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:14:47.538837  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:14:47.687852  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:14:47.687936  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:14:47.691535  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:14:47.691631  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:14:47.691635  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:14:47.691757  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:14:47.691809  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:14:47.841021  402825 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 13:14:47.841082  402825 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 13:14:47.841109  402825 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 13:14:47.848974  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:14:47.849047  402825 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 13:14:47.849056  402825 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:14:47.849047  402825 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 13:14:47.902844  402825 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 13:14:47.902931  402825 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 13:14:49.880594  402825 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:14:50.023368  402825 cache_images.go:92] duration metric: took 2.98633604s to LoadCachedImages
	W0127 13:14:50.023501  402825 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
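	The warning above means the per-image cache fallback also came up empty: the expected file kube-controller-manager_v1.20.0 is missing from the host-side cache. A minimal sketch of inspecting that cache directory on the Jenkins host (path taken from the log; the listing is purely diagnostic):
	
	  ls -l /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/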
	I0127 13:14:50.023525  402825 kubeadm.go:934] updating node { 192.168.50.10 8443 v1.20.0 crio true true} ...
	I0127 13:14:50.023675  402825 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-511736 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
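	The kubelet drop-in rendered above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 432-byte scp a few lines below). A minimal sketch of inspecting the effective unit on the guest, assuming a shell over SSH:
	
	  # show kubelet.service plus all drop-ins, including the ExecStart line above
	  systemctl cat kubelet
	  systemctl status kubelet --no-pager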
	I0127 13:14:50.023775  402825 ssh_runner.go:195] Run: crio config
	I0127 13:14:50.073681  402825 cni.go:84] Creating CNI manager for ""
	I0127 13:14:50.073705  402825 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:14:50.073716  402825 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:14:50.073735  402825 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-511736 NodeName:kubernetes-upgrade-511736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 13:14:50.073913  402825 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-511736"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:14:50.073987  402825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 13:14:50.087788  402825 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:14:50.087867  402825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:14:50.100664  402825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0127 13:14:50.120546  402825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:14:50.137433  402825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
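	The kubeadm config rendered above (kubeadm.k8s.io/v1beta2 InitConfiguration/ClusterConfiguration plus KubeletConfiguration and KubeProxyConfiguration) has just been written to /var/tmp/minikube/kubeadm.yaml.new (2123 bytes) and is promoted to /var/tmp/minikube/kubeadm.yaml before init further down in the log. A minimal sketch of sanity-checking such a config against the pinned binaries without touching the node; this is a hypothetical manual step, not something the test performs:
	
	  # render what kubeadm would do, using the same pinned v1.20.0 binaries as the log
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run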
	I0127 13:14:50.154279  402825 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I0127 13:14:50.160144  402825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
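	Together with the earlier host.minikube.internal edit, the command above leaves two minikube-specific entries in the guest's /etc/hosts. A minimal sketch of confirming them, with the expected values taken from the two bash commands in the log:
	
	  grep minikube.internal /etc/hosts
	  # expected:
	  # 192.168.50.1    host.minikube.internal
	  # 192.168.50.10   control-plane.minikube.internal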
	I0127 13:14:50.173118  402825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:14:50.316478  402825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:14:50.333220  402825 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736 for IP: 192.168.50.10
	I0127 13:14:50.333251  402825 certs.go:194] generating shared ca certs ...
	I0127 13:14:50.333276  402825 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:14:50.333472  402825 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:14:50.333544  402825 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:14:50.333562  402825 certs.go:256] generating profile certs ...
	I0127 13:14:50.333654  402825 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/client.key
	I0127 13:14:50.333675  402825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/client.crt with IP's: []
	I0127 13:14:50.526599  402825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/client.crt ...
	I0127 13:14:50.526631  402825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/client.crt: {Name:mk28e1808dcb01b0106ac92d9633ce65e416e7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:14:50.526829  402825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/client.key ...
	I0127 13:14:50.526846  402825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/client.key: {Name:mk02158e6e335a68bf31afefd59af5b59ed0af93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:14:50.526956  402825 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.key.08f519f0
	I0127 13:14:50.526976  402825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.crt.08f519f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.10]
	I0127 13:14:50.601189  402825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.crt.08f519f0 ...
	I0127 13:14:50.601230  402825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.crt.08f519f0: {Name:mkfbdccf6e1546468b33a0c76fd4522d9c4e0d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:14:50.634434  402825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.key.08f519f0 ...
	I0127 13:14:50.634479  402825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.key.08f519f0: {Name:mk02d59e4ff6abae856faf6d7a91c8ada1326bb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:14:50.634641  402825 certs.go:381] copying /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.crt.08f519f0 -> /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.crt
	I0127 13:14:50.634772  402825 certs.go:385] copying /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.key.08f519f0 -> /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.key
	I0127 13:14:50.634877  402825 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.key
	I0127 13:14:50.634906  402825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.crt with IP's: []
	I0127 13:14:50.780050  402825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.crt ...
	I0127 13:14:50.780090  402825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.crt: {Name:mk519d35416258b16a1bdac16763565da6dfdcc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:14:50.780292  402825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.key ...
	I0127 13:14:50.780321  402825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.key: {Name:mkbbe70d080a82a5e547722fd17cfaf0acfb111d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:14:50.780531  402825 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:14:50.780585  402825 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:14:50.780608  402825 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:14:50.780644  402825 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:14:50.780680  402825 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:14:50.780716  402825 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:14:50.780774  402825 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:14:50.781409  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:14:50.809503  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:14:50.834404  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:14:50.859240  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:14:50.887120  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 13:14:50.913085  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:14:50.938439  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:14:50.961503  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:14:50.984693  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:14:51.008077  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:14:51.031067  402825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:14:51.053986  402825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
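	The apiserver certificate generated above was signed for IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.10 and has just been copied to /var/lib/minikube/certs/apiserver.crt. A minimal sketch of confirming those SANs on the guest; a manual check, not part of the test:
	
	  sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'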
	I0127 13:14:51.070241  402825 ssh_runner.go:195] Run: openssl version
	I0127 13:14:51.076019  402825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:14:51.087013  402825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:14:51.091890  402825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:14:51.091952  402825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:14:51.097966  402825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:14:51.108946  402825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:14:51.120264  402825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:14:51.125311  402825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:14:51.125374  402825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:14:51.131444  402825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:14:51.150182  402825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:14:51.165695  402825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:14:51.171438  402825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:14:51.171499  402825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:14:51.179937  402825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
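	The /etc/ssl/certs/<hash>.0 symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: the link name is the subject-name hash of the certificate it points to. A minimal sketch of reproducing one of the hashes by hand, assuming the same files are in place on the guest:
	
	  # prints the 8-hex-digit subject hash used as the symlink name
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expect: b5213941
	  ls -l /etc/ssl/certs/b5213941.0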
	I0127 13:14:51.194409  402825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:14:51.199871  402825 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 13:14:51.199940  402825 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:14:51.200061  402825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:14:51.200122  402825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:14:51.259757  402825 cri.go:89] found id: ""
	I0127 13:14:51.259836  402825 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:14:51.277692  402825 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:14:51.288427  402825 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:14:51.298005  402825 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:14:51.298028  402825 kubeadm.go:157] found existing configuration files:
	
	I0127 13:14:51.298073  402825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:14:51.307951  402825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:14:51.308008  402825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:14:51.317520  402825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:14:51.327274  402825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:14:51.327324  402825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:14:51.336910  402825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:14:51.346122  402825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:14:51.346181  402825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:14:51.355378  402825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:14:51.364370  402825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:14:51.364431  402825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:14:51.373809  402825 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:14:51.674745  402825 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:16:49.224834  402825 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:16:49.225028  402825 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:16:49.226521  402825 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:16:49.226656  402825 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:16:49.226848  402825 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:16:49.227105  402825 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:16:49.227370  402825 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:16:49.227553  402825 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:16:49.229212  402825 out.go:235]   - Generating certificates and keys ...
	I0127 13:16:49.229311  402825 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:16:49.229416  402825 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:16:49.229523  402825 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 13:16:49.229610  402825 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 13:16:49.229693  402825 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 13:16:49.229790  402825 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 13:16:49.229883  402825 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 13:16:49.230056  402825 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-511736 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	I0127 13:16:49.230135  402825 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 13:16:49.230421  402825 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-511736 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	I0127 13:16:49.230478  402825 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 13:16:49.230529  402825 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 13:16:49.230614  402825 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 13:16:49.230662  402825 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:16:49.230750  402825 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:16:49.230840  402825 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:16:49.230929  402825 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:16:49.231014  402825 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:16:49.231162  402825 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:16:49.231289  402825 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:16:49.231360  402825 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:16:49.231453  402825 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:16:49.232835  402825 out.go:235]   - Booting up control plane ...
	I0127 13:16:49.232910  402825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:16:49.232988  402825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:16:49.233057  402825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:16:49.233149  402825 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:16:49.233283  402825 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:16:49.233358  402825 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:16:49.233461  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:16:49.233710  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:16:49.233809  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:16:49.234078  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:16:49.234171  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:16:49.234451  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:16:49.234604  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:16:49.234764  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:16:49.234822  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:16:49.235009  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:16:49.235028  402825 kubeadm.go:310] 
	I0127 13:16:49.235089  402825 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:16:49.235164  402825 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:16:49.235184  402825 kubeadm.go:310] 
	I0127 13:16:49.235243  402825 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:16:49.235281  402825 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:16:49.235434  402825 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:16:49.235445  402825 kubeadm.go:310] 
	I0127 13:16:49.235582  402825 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:16:49.235634  402825 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:16:49.235698  402825 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:16:49.235711  402825 kubeadm.go:310] 
	I0127 13:16:49.235871  402825 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:16:49.235994  402825 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:16:49.236005  402825 kubeadm.go:310] 
	I0127 13:16:49.236150  402825 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:16:49.236280  402825 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:16:49.236394  402825 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:16:49.236470  402825 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:16:49.236501  402825 kubeadm.go:310] 
	W0127 13:16:49.236633  402825 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-511736 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-511736 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-511736 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-511736 localhost] and IPs [192.168.50.10 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 13:16:49.236695  402825 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:16:50.795025  402825 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.55829942s)
	I0127 13:16:50.795133  402825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:16:50.809427  402825 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:16:50.819057  402825 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:16:50.819082  402825 kubeadm.go:157] found existing configuration files:
	
	I0127 13:16:50.819136  402825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:16:50.828248  402825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:16:50.828304  402825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:16:50.838014  402825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:16:50.846932  402825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:16:50.846988  402825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:16:50.856720  402825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:16:50.865845  402825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:16:50.865902  402825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:16:50.875070  402825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:16:50.883811  402825 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:16:50.883856  402825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:16:50.892825  402825 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:16:50.968200  402825 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:16:50.968268  402825 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:16:51.120883  402825 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:16:51.121054  402825 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:16:51.121224  402825 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:16:51.304567  402825 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:16:51.306480  402825 out.go:235]   - Generating certificates and keys ...
	I0127 13:16:51.306639  402825 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:16:51.306743  402825 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:16:51.306857  402825 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:16:51.306953  402825 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:16:51.307062  402825 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:16:51.307158  402825 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:16:51.307273  402825 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:16:51.307357  402825 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:16:51.307484  402825 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:16:51.307590  402825 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:16:51.307645  402825 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:16:51.307743  402825 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:16:51.662619  402825 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:16:51.814268  402825 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:16:52.052758  402825 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:16:52.246201  402825 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:16:52.261079  402825 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:16:52.262366  402825 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:16:52.262446  402825 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:16:52.426636  402825 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:16:52.428630  402825 out.go:235]   - Booting up control plane ...
	I0127 13:16:52.428771  402825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:16:52.442814  402825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:16:52.445072  402825 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:16:52.446938  402825 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:16:52.449026  402825 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:17:32.452519  402825 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:17:32.452719  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:17:32.452962  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:17:37.453597  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:17:37.453829  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:17:47.454463  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:17:47.454674  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:18:07.453687  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:18:07.453884  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:18:47.453485  402825 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:18:47.453668  402825 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:18:47.453676  402825 kubeadm.go:310] 
	I0127 13:18:47.453717  402825 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:18:47.453756  402825 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:18:47.453768  402825 kubeadm.go:310] 
	I0127 13:18:47.453817  402825 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:18:47.453857  402825 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:18:47.453992  402825 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:18:47.454011  402825 kubeadm.go:310] 
	I0127 13:18:47.454126  402825 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:18:47.454180  402825 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:18:47.454226  402825 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:18:47.454242  402825 kubeadm.go:310] 
	I0127 13:18:47.454392  402825 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:18:47.454500  402825 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:18:47.454511  402825 kubeadm.go:310] 
	I0127 13:18:47.454668  402825 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:18:47.454790  402825 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:18:47.454902  402825 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:18:47.455026  402825 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:18:47.455061  402825 kubeadm.go:310] 
	I0127 13:18:47.456201  402825 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:18:47.456295  402825 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:18:47.456366  402825 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:18:47.456465  402825 kubeadm.go:394] duration metric: took 3m56.256530898s to StartCluster
	I0127 13:18:47.456521  402825 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:18:47.456590  402825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:18:47.514746  402825 cri.go:89] found id: ""
	I0127 13:18:47.514775  402825 logs.go:282] 0 containers: []
	W0127 13:18:47.514784  402825 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:18:47.514790  402825 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:18:47.514860  402825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:18:47.558262  402825 cri.go:89] found id: ""
	I0127 13:18:47.558289  402825 logs.go:282] 0 containers: []
	W0127 13:18:47.558296  402825 logs.go:284] No container was found matching "etcd"
	I0127 13:18:47.558303  402825 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:18:47.558376  402825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:18:47.594932  402825 cri.go:89] found id: ""
	I0127 13:18:47.594966  402825 logs.go:282] 0 containers: []
	W0127 13:18:47.594980  402825 logs.go:284] No container was found matching "coredns"
	I0127 13:18:47.594988  402825 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:18:47.595067  402825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:18:47.638137  402825 cri.go:89] found id: ""
	I0127 13:18:47.638169  402825 logs.go:282] 0 containers: []
	W0127 13:18:47.638181  402825 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:18:47.638189  402825 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:18:47.638256  402825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:18:47.684676  402825 cri.go:89] found id: ""
	I0127 13:18:47.684711  402825 logs.go:282] 0 containers: []
	W0127 13:18:47.684725  402825 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:18:47.684734  402825 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:18:47.684794  402825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:18:47.720296  402825 cri.go:89] found id: ""
	I0127 13:18:47.720327  402825 logs.go:282] 0 containers: []
	W0127 13:18:47.720357  402825 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:18:47.720367  402825 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:18:47.720449  402825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:18:47.761407  402825 cri.go:89] found id: ""
	I0127 13:18:47.761445  402825 logs.go:282] 0 containers: []
	W0127 13:18:47.761457  402825 logs.go:284] No container was found matching "kindnet"
	I0127 13:18:47.761473  402825 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:18:47.761491  402825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:18:47.910851  402825 logs.go:123] Gathering logs for container status ...
	I0127 13:18:47.910898  402825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:18:47.960884  402825 logs.go:123] Gathering logs for kubelet ...
	I0127 13:18:47.960922  402825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:18:48.022709  402825 logs.go:123] Gathering logs for dmesg ...
	I0127 13:18:48.022747  402825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:18:48.039571  402825 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:18:48.039613  402825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:18:48.166370  402825 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0127 13:18:48.166405  402825 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 13:18:48.166474  402825 out.go:270] * 
	W0127 13:18:48.166559  402825 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:18:48.166582  402825 out.go:270] * 
	W0127 13:18:48.167897  402825 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:18:48.171314  402825 out.go:201] 
	W0127 13:18:48.172556  402825 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W0127 13:18:48.172606  402825 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 13:18:48.172633  402825 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 13:18:48.173984  402825 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
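The kubeadm failure above comes down to the kubelet never answering on localhost:10248, and minikube's own suggestion is to read the kubelet journal and retry with the systemd cgroup driver. A minimal triage sketch, assuming the commands quoted in the kubeadm output are run inside the node via `minikube ssh` (the profile name is taken from the log; the cgroup-driver workaround is minikube's suggestion, not something verified by this report):

	# inspect the kubelet and the control-plane containers inside the guest
	minikube -p kubernetes-upgrade-511736 ssh -- sudo systemctl status kubelet
	minikube -p kubernetes-upgrade-511736 ssh -- sudo journalctl -xeu kubelet
	minikube -p kubernetes-upgrade-511736 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry the same start with the suggested kubelet cgroup driver
	out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd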
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-511736
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-511736: (6.334569265s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-511736 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-511736 status --format={{.Host}}: exit status 7 (76.912496ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
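For reference, `--format` takes a Go template over minikube's status struct, so the same check can look at more than the host state; a small sketch assuming the usual Host/Kubelet/APIServer/Kubeconfig fields (a hypothetical combination, not part of the test):

	out/minikube-linux-amd64 -p kubernetes-upgrade-511736 status --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
	# or the full structured view
	out/minikube-linux-amd64 -p kubernetes-upgrade-511736 status --output=json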
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.316081043s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-511736 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.726963ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-511736] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-511736
	    minikube start -p kubernetes-upgrade-511736 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5117362 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-511736 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
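The K8S_DOWNGRADE_UNSUPPORTED output above already spells out the three ways forward; as a sketch, the first option (recreate the profile at the older version) would look like the following, with the driver and runtime flags added here to match the rest of this test run (an assumption, since the suggestion itself omits them):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-511736
	out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio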
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0127 13:19:39.547541  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-511736 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.477826955s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-27 13:20:37.589972185 +0000 UTC m=+4134.286812995
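When a run fails like this, the warning box earlier in the output asks for the full log bundle; a minimal sketch of collecting it for the failing profile (the -p flag is an addition here to target this profile):

	out/minikube-linux-amd64 -p kubernetes-upgrade-511736 logs --file=logs.txt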
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-511736 -n kubernetes-upgrade-511736
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-511736 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-511736 logs -n 25: (1.608338022s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-392035 sudo           | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| start   | -p pause-715621 --memory=2048         | pause-715621              | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC | 27 Jan 25 13:17 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-392035                | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p NoKubernetes-392035                | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-413928             | running-upgrade-413928    | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p cert-expiration-180143             | cert-expiration-180143    | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-392035 sudo           | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-392035                | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p force-systemd-flag-268206          | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-715621                       | pause-715621              | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:19 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-268206 ssh cat     | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-268206          | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	| start   | -p stopped-upgrade-619602             | minikube                  | jenkins | v1.26.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:19 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-511736          | kubernetes-upgrade-511736 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	| start   | -p kubernetes-upgrade-511736          | kubernetes-upgrade-511736 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:19 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-715621                       | pause-715621              | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC | 27 Jan 25 13:19 UTC |
	| start   | -p cert-options-324444                | cert-options-324444       | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC | 27 Jan 25 13:20 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-619602 stop           | minikube                  | jenkins | v1.26.0 | 27 Jan 25 13:19 UTC | 27 Jan 25 13:19 UTC |
	| start   | -p stopped-upgrade-619602             | stopped-upgrade-619602    | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-511736          | kubernetes-upgrade-511736 | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-511736          | kubernetes-upgrade-511736 | jenkins | v1.35.0 | 27 Jan 25 13:19 UTC | 27 Jan 25 13:20 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-324444 ssh               | cert-options-324444       | jenkins | v1.35.0 | 27 Jan 25 13:20 UTC | 27 Jan 25 13:20 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-324444 -- sudo        | cert-options-324444       | jenkins | v1.35.0 | 27 Jan 25 13:20 UTC | 27 Jan 25 13:20 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-324444                | cert-options-324444       | jenkins | v1.35.0 | 27 Jan 25 13:20 UTC | 27 Jan 25 13:20 UTC |
	| start   | -p auto-211629 --memory=3072          | auto-211629               | jenkins | v1.35.0 | 27 Jan 25 13:20 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:20:09
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:20:09.276877  411032 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:20:09.277006  411032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:20:09.277020  411032 out.go:358] Setting ErrFile to fd 2...
	I0127 13:20:09.277027  411032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:20:09.277315  411032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:20:09.278097  411032 out.go:352] Setting JSON to false
	I0127 13:20:09.279515  411032 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":21749,"bootTime":1737962260,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:20:09.279665  411032 start.go:139] virtualization: kvm guest
	I0127 13:20:09.282256  411032 out.go:177] * [auto-211629] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:20:09.283755  411032 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:20:09.283758  411032 notify.go:220] Checking for updates...
	I0127 13:20:09.285425  411032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:20:09.287007  411032 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:20:09.288459  411032 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:20:09.290055  411032 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:20:09.291510  411032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:20:09.293540  411032 config.go:182] Loaded profile config "cert-expiration-180143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:20:09.293685  411032 config.go:182] Loaded profile config "kubernetes-upgrade-511736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:20:09.293813  411032 config.go:182] Loaded profile config "stopped-upgrade-619602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 13:20:09.293950  411032 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:20:09.342037  411032 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 13:20:09.343593  411032 start.go:297] selected driver: kvm2
	I0127 13:20:09.343616  411032 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:20:09.343632  411032 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:20:09.344667  411032 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:20:09.344766  411032 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:20:09.364855  411032 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:20:09.364930  411032 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:20:09.365302  411032 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:20:09.365370  411032 cni.go:84] Creating CNI manager for ""
	I0127 13:20:09.365442  411032 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:20:09.365454  411032 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 13:20:09.365532  411032 start.go:340] cluster config:
	{Name:auto-211629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-211629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:20:09.365681  411032 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:20:09.367800  411032 out.go:177] * Starting "auto-211629" primary control-plane node in "auto-211629" cluster
	I0127 13:20:09.826277  410467 crio.go:462] duration metric: took 1.604690861s to copy over tarball
	I0127 13:20:09.826368  410467 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:20:12.689837  410467 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.863431018s)
	I0127 13:20:12.689873  410467 crio.go:469] duration metric: took 2.863556132s to extract the tarball
	I0127 13:20:12.689884  410467 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 13:20:12.736481  410467 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:20:12.767683  410467 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0127 13:20:12.767723  410467 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 13:20:12.767816  410467 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:20:12.767823  410467 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 13:20:12.767897  410467 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:20:12.767933  410467 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 13:20:12.767829  410467 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 13:20:12.768037  410467 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0127 13:20:12.767835  410467 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 13:20:12.767987  410467 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:20:12.769344  410467 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:20:12.769365  410467 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 13:20:12.769345  410467 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 13:20:12.769426  410467 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:20:12.769425  410467 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 13:20:12.769425  410467 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0127 13:20:12.769444  410467 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 13:20:12.769581  410467 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:20:12.938322  410467 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 13:20:12.971399  410467 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0127 13:20:12.978329  410467 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0127 13:20:12.978377  410467 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 13:20:12.978428  410467 ssh_runner.go:195] Run: which crictl
	I0127 13:20:12.994554  410467 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0127 13:20:13.011831  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 13:20:13.011835  410467 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0127 13:20:13.011945  410467 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0127 13:20:13.011970  410467 ssh_runner.go:195] Run: which crictl
	I0127 13:20:13.044683  410467 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0127 13:20:13.044740  410467 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0127 13:20:13.044784  410467 ssh_runner.go:195] Run: which crictl
	I0127 13:20:13.044787  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:20:13.055357  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 13:20:13.085306  410467 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0127 13:20:13.085466  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 13:20:13.085531  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:20:13.092043  410467 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:20:13.092054  410467 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0127 13:20:13.097650  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0127 13:20:13.104121  410467 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0127 13:20:13.240419  410467 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0127 13:20:13.240474  410467 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0127 13:20:13.240532  410467 ssh_runner.go:195] Run: which crictl
	I0127 13:20:13.240558  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 13:20:13.240578  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0127 13:20:13.240631  410467 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0127 13:20:13.240669  410467 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:20:13.240639  410467 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0127 13:20:13.240704  410467 ssh_runner.go:195] Run: which crictl
	I0127 13:20:13.240711  410467 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0127 13:20:13.240716  410467 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0127 13:20:13.240758  410467 ssh_runner.go:195] Run: which crictl
	I0127 13:20:13.257309  410467 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0127 13:20:13.257390  410467 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0127 13:20:13.257422  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 13:20:13.257432  410467 ssh_runner.go:195] Run: which crictl
	I0127 13:20:13.292879  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:20:13.292879  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0127 13:20:13.292968  410467 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0127 13:20:13.292973  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:20:13.293033  410467 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0127 13:20:13.329866  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 13:20:13.329955  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 13:20:13.359062  410467 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0127 13:20:13.359140  410467 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0127 13:20:13.359168  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0127 13:20:13.359356  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:20:13.374467  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:20:13.401875  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0127 13:20:13.417851  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 13:20:13.438941  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0127 13:20:09.369166  411032 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:20:09.369214  411032 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:20:09.369227  411032 cache.go:56] Caching tarball of preloaded images
	I0127 13:20:09.369363  411032 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:20:09.369389  411032 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:20:09.369536  411032 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/config.json ...
	I0127 13:20:09.369566  411032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/config.json: {Name:mka70d11a6d9e388a8dda7fbe23ece1d82ce396f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:20:09.369761  411032 start.go:360] acquireMachinesLock for auto-211629: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:20:14.123462  411032 start.go:364] duration metric: took 4.753664517s to acquireMachinesLock for "auto-211629"
	I0127 13:20:14.123545  411032 start.go:93] Provisioning new machine with config: &{Name:auto-211629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:auto-211629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:20:14.123674  411032 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 13:20:13.866181  410617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:20:13.866212  410617 machine.go:96] duration metric: took 7.093600791s to provisionDockerMachine
	I0127 13:20:13.866230  410617 start.go:293] postStartSetup for "kubernetes-upgrade-511736" (driver="kvm2")
	I0127 13:20:13.866245  410617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:20:13.866269  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:20:13.866632  410617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:20:13.866669  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:20:13.870213  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:13.870698  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:20:13.870730  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:13.870917  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:20:13.871103  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:20:13.871301  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:20:13.871452  410617 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa Username:docker}
	I0127 13:20:13.962026  410617 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:20:13.967614  410617 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:20:13.967644  410617 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:20:13.967724  410617 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:20:13.967825  410617 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:20:13.967946  410617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:20:13.980810  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:20:14.014943  410617 start.go:296] duration metric: took 148.693828ms for postStartSetup
	I0127 13:20:14.014991  410617 fix.go:56] duration metric: took 7.270923221s for fixHost
	I0127 13:20:14.015019  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:20:14.018519  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:14.019002  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:20:14.019036  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:14.019247  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:20:14.019451  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:20:14.019639  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:20:14.019797  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:20:14.019978  410617 main.go:141] libmachine: Using SSH client type: native
	I0127 13:20:14.020273  410617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.10 22 <nil> <nil>}
	I0127 13:20:14.020297  410617 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:20:14.123291  410617 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984014.116134017
	
	I0127 13:20:14.123319  410617 fix.go:216] guest clock: 1737984014.116134017
	I0127 13:20:14.123329  410617 fix.go:229] Guest: 2025-01-27 13:20:14.116134017 +0000 UTC Remote: 2025-01-27 13:20:14.014996331 +0000 UTC m=+34.898564118 (delta=101.137686ms)
	I0127 13:20:14.123354  410617 fix.go:200] guest clock delta is within tolerance: 101.137686ms
	I0127 13:20:14.123361  410617 start.go:83] releasing machines lock for "kubernetes-upgrade-511736", held for 7.379320135s
	I0127 13:20:14.123393  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:20:14.123690  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetIP
	I0127 13:20:14.126469  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:14.126881  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:20:14.126907  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:14.127085  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:20:14.127523  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:20:14.127717  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:20:14.127798  410617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:20:14.127849  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:20:14.127905  410617 ssh_runner.go:195] Run: cat /version.json
	I0127 13:20:14.127929  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHHostname
	I0127 13:20:14.130510  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:14.130806  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:14.130917  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:20:14.130950  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:14.131073  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:20:14.131203  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:20:14.131227  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:14.131237  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:20:14.131422  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:20:14.131426  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHPort
	I0127 13:20:14.131613  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHKeyPath
	I0127 13:20:14.131629  410617 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa Username:docker}
	I0127 13:20:14.131773  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetSSHUsername
	I0127 13:20:14.131965  410617 sshutil.go:53] new ssh client: &{IP:192.168.50.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/kubernetes-upgrade-511736/id_rsa Username:docker}
	I0127 13:20:14.229357  410617 ssh_runner.go:195] Run: systemctl --version
	I0127 13:20:14.235620  410617 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:20:14.391733  410617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:20:14.399960  410617 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:20:14.400022  410617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:20:14.410524  410617 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 13:20:14.410563  410617 start.go:495] detecting cgroup driver to use...
	I0127 13:20:14.410622  410617 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:20:14.428536  410617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:20:14.443403  410617 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:20:14.443479  410617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:20:14.499874  410617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:20:14.522975  410617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:20:14.718448  410617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:20:14.876327  410617 docker.go:233] disabling docker service ...
	I0127 13:20:14.876391  410617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:20:14.899019  410617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:20:14.919253  410617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:20:15.111327  410617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:20:15.356206  410617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:20:15.464911  410617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:20:15.568413  410617 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:20:15.568495  410617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:20:15.608683  410617 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:20:15.608774  410617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:20:15.639355  410617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:20:15.684283  410617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:20:15.710582  410617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:20:15.751775  410617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:20:15.777118  410617 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:20:15.803751  410617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:20:15.831845  410617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:20:15.851755  410617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:20:15.863632  410617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:20:16.056392  410617 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:20:16.757987  410617 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:20:16.758070  410617 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:20:16.763450  410617 start.go:563] Will wait 60s for crictl version
	I0127 13:20:16.763512  410617 ssh_runner.go:195] Run: which crictl
	I0127 13:20:16.768355  410617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:20:16.811796  410617 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:20:16.811886  410617 ssh_runner.go:195] Run: crio --version
	I0127 13:20:16.849809  410617 ssh_runner.go:195] Run: crio --version
	I0127 13:20:16.881128  410617 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
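
The block above shows minikube configuring CRI-O over SSH: the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf with sed one-liners, then systemd is reloaded and crio restarted. A minimal standalone sketch of the same file edit, not minikube's code; the path and values come from the log, the function name is made up:

// configure_crio.go - rewrite pause_image and cgroup_manager in a CRI-O
// drop-in config, mirroring the sed commands shown in the log above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func configureCrio(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	// mirrors: sed 's|^.*pause_image = .*$|pause_image = "..."|'
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, fmt.Sprintf("pause_image = %q", pauseImage))
	// mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "..."|'
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return os.WriteFile(path, []byte(s), 0o644)
}

func main() {
	if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
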
	I0127 13:20:13.521706  410467 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0127 13:20:13.531369  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0127 13:20:13.538960  410467 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0127 13:20:13.557941  410467 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0127 13:20:13.558083  410467 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:20:13.607813  410467 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0127 13:20:13.607950  410467 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0127 13:20:13.633876  410467 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0127 13:20:13.634009  410467 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0127 13:20:13.634052  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0127 13:20:13.639286  410467 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0127 13:20:13.639318  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0127 13:20:13.720622  410467 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0127 13:20:13.720704  410467 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0127 13:20:15.111468  410467 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:20:16.452888  410467 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.732146232s)
	I0127 13:20:16.452939  410467 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0127 13:20:16.452959  410467 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.341452047s)
	I0127 13:20:16.452989  410467 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:20:16.453073  410467 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0127 13:20:16.901495  410467 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0127 13:20:16.901539  410467 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0127 13:20:16.901584  410467 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
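
The interleaved 410467 process is restoring a v1.24.1 cluster from cached image tarballs: each missing image is scp'd to /var/lib/minikube/images and loaded with "sudo podman load -i <tarball>", exactly as the log shows. A small sketch of that loading loop (this is not minikube's loader; the file paths are the ones in the log):

// load_images.go - run "sudo podman load -i <tarball>" for each cached image.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadImage(tarball string) error {
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	images := []string{
		"/var/lib/minikube/images/pause_3.7",
		"/var/lib/minikube/images/coredns_v1.8.6",
		"/var/lib/minikube/images/etcd_3.5.3-0",
	}
	for _, img := range images {
		if err := loadImage(img); err != nil {
			fmt.Fprintf(os.Stderr, "load %s: %v\n", img, err)
			os.Exit(1)
		}
	}
}
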
	I0127 13:20:16.882478  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetIP
	I0127 13:20:16.885293  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:16.885699  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f9:6f", ip: ""} in network mk-kubernetes-upgrade-511736: {Iface:virbr2 ExpiryTime:2025-01-27 14:14:32 +0000 UTC Type:0 Mac:52:54:00:c4:f9:6f Iaid: IPaddr:192.168.50.10 Prefix:24 Hostname:kubernetes-upgrade-511736 Clientid:01:52:54:00:c4:f9:6f}
	I0127 13:20:16.885729  410617 main.go:141] libmachine: (kubernetes-upgrade-511736) DBG | domain kubernetes-upgrade-511736 has defined IP address 192.168.50.10 and MAC address 52:54:00:c4:f9:6f in network mk-kubernetes-upgrade-511736
	I0127 13:20:16.885902  410617 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 13:20:16.890241  410617 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:20:16.890408  410617 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:20:16.890458  410617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:20:16.945160  410617 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:20:16.945193  410617 crio.go:433] Images already preloaded, skipping extraction
	I0127 13:20:16.945258  410617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:20:16.987257  410617 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:20:16.987283  410617 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:20:16.987293  410617 kubeadm.go:934] updating node { 192.168.50.10 8443 v1.32.1 crio true true} ...
	I0127 13:20:16.987428  410617 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-511736 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:20:16.987508  410617 ssh_runner.go:195] Run: crio config
	I0127 13:20:17.053834  410617 cni.go:84] Creating CNI manager for ""
	I0127 13:20:17.053861  410617 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:20:17.053875  410617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:20:17.053908  410617 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.10 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-511736 NodeName:kubernetes-upgrade-511736 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:20:17.054092  410617 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-511736"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:20:17.054177  410617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:20:17.067037  410617 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:20:17.067187  410617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:20:17.082556  410617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0127 13:20:17.102161  410617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:20:17.120732  410617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
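
At this point the kubeadm config printed above has been written out to /var/tmp/minikube/kubeadm.yaml.new. As a hedged illustration of how such a document can be rendered from the node parameters in the log, here is a text/template sketch covering only the InitConfiguration section; the template and parameter struct are assumptions, not minikube's source:

// render_kubeadm.go - render a trimmed InitConfiguration from node parameters.
package main

import (
	"os"
	"text/template"
)

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	params := struct {
		AdvertiseAddress, CRISocket, NodeName string
		APIServerPort                         int
	}{
		AdvertiseAddress: "192.168.50.10",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "kubernetes-upgrade-511736",
		APIServerPort:    8443,
	}
	// Write the rendered document to stdout; the real file is copied to the
	// guest as /var/tmp/minikube/kubeadm.yaml.new, as the log shows.
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
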
	I0127 13:20:17.140072  410617 ssh_runner.go:195] Run: grep 192.168.50.10	control-plane.minikube.internal$ /etc/hosts
	I0127 13:20:17.145108  410617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:20:17.340796  410617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:20:17.360567  410617 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736 for IP: 192.168.50.10
	I0127 13:20:17.360585  410617 certs.go:194] generating shared ca certs ...
	I0127 13:20:17.360602  410617 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:20:17.360787  410617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:20:17.360851  410617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:20:17.360863  410617 certs.go:256] generating profile certs ...
	I0127 13:20:17.360939  410617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/client.key
	I0127 13:20:17.360984  410617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.key.08f519f0
	I0127 13:20:17.361024  410617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.key
	I0127 13:20:17.361151  410617 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:20:17.361187  410617 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:20:17.361203  410617 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:20:17.361237  410617 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:20:17.361271  410617 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:20:17.361303  410617 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:20:17.361352  410617 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:20:17.361918  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:20:17.390919  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:20:17.420777  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:20:17.447707  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:20:17.474171  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 13:20:17.500555  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:20:17.530766  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:20:17.603278  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:20:17.641026  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:20:17.800333  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:20:17.958828  410617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:20:18.253319  410617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:20:18.306314  410617 ssh_runner.go:195] Run: openssl version
	I0127 13:20:18.326833  410617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:20:18.472146  410617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:20:18.509284  410617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:20:18.509365  410617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:20:18.538746  410617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:20:18.603694  410617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:20:18.651452  410617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:20:18.678580  410617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:20:18.678664  410617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:20:18.712438  410617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:20:18.760044  410617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:20:18.830051  410617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:20:18.854610  410617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:20:18.854715  410617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:20:18.883789  410617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
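
The ln -fs commands above follow the usual OpenSSL trust-store convention: each extra CA certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash with a ".0" suffix. A standalone sketch of one such link, shelling out to openssl exactly as the log does (the helper name is made up):

// cert_links.go - create /etc/ssl/certs/<subject-hash>.0 -> <pem>.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// equivalent to: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
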
	I0127 13:20:18.939766  410617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:20:18.962794  410617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:20:18.978583  410617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:20:18.998991  410617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:20:19.008991  410617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:20:19.019153  410617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:20:19.030047  410617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 13:20:19.041926  410617 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:20:19.042047  410617 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:20:19.042135  410617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:20:19.116418  410617 cri.go:89] found id: "86d04392445230790a32cbc881c892a16bb6aeb06d291a7e6b619e4667c6449f"
	I0127 13:20:19.116450  410617 cri.go:89] found id: "6c70f8390b9e6a5592d083189c464c404c726f19b28fa31e38781e5be2685476"
	I0127 13:20:19.116458  410617 cri.go:89] found id: "8624ea2a59349bacedf40b4abbb3ab02a30c2d873c1f3a2337a89e64cb6a0028"
	I0127 13:20:19.116477  410617 cri.go:89] found id: "3ace47a6bb0335c64bd557fe9bf459541939a4198b27abe42dbb60bda6b3a1bd"
	I0127 13:20:19.116481  410617 cri.go:89] found id: "cb5ddf2db957b03e2f3281a0b7c418cc4f278d9ef2e29ff913bcbc63142861a2"
	I0127 13:20:19.116486  410617 cri.go:89] found id: "8fe78a746b52cb5353817522307cbea9850a26aee0aa863b500c98aeaa7a6bb9"
	I0127 13:20:19.116489  410617 cri.go:89] found id: "e1b0af297bce95a89c775bd927ac2bd5acd738532d9f0d4bb25ee46b31b910f4"
	I0127 13:20:19.116493  410617 cri.go:89] found id: "942ce31f868a2f13293d0e9c2d85744876d90f69e3277cc5ef54bdb8efa154b2"
	I0127 13:20:19.116497  410617 cri.go:89] found id: "c35d1d335c5679062283240184ba9d4503371bd5aa1e70057309430f8ef5afba"
	I0127 13:20:19.116506  410617 cri.go:89] found id: "6cde4b6852282cc424ee4f27186c6320a4cb2973820436c6ebecddd1afe222ef"
	I0127 13:20:19.116514  410617 cri.go:89] found id: "a6aed61c03f94fedc21336b520fa93de75b2bb110f942a3e9b461656cd3387f4"
	I0127 13:20:19.116518  410617 cri.go:89] found id: ""
	I0127 13:20:19.116574  410617 ssh_runner.go:195] Run: sudo runc list -f json
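
The container IDs listed above come from "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system". A small illustrative sketch that runs the same command and collects the IDs:

// list_kube_system.go - gather kube-system container IDs via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
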
	I0127 13:20:14.481045  411032 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 13:20:14.481290  411032 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:20:14.481366  411032 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:20:14.497887  411032 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0127 13:20:14.498416  411032 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:20:14.499044  411032 main.go:141] libmachine: Using API Version  1
	I0127 13:20:14.499074  411032 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:20:14.499426  411032 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:20:14.499684  411032 main.go:141] libmachine: (auto-211629) Calling .GetMachineName
	I0127 13:20:14.499892  411032 main.go:141] libmachine: (auto-211629) Calling .DriverName
	I0127 13:20:14.500058  411032 start.go:159] libmachine.API.Create for "auto-211629" (driver="kvm2")
	I0127 13:20:14.500098  411032 client.go:168] LocalClient.Create starting
	I0127 13:20:14.500136  411032 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem
	I0127 13:20:14.500176  411032 main.go:141] libmachine: Decoding PEM data...
	I0127 13:20:14.500197  411032 main.go:141] libmachine: Parsing certificate...
	I0127 13:20:14.500261  411032 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem
	I0127 13:20:14.500283  411032 main.go:141] libmachine: Decoding PEM data...
	I0127 13:20:14.500291  411032 main.go:141] libmachine: Parsing certificate...
	I0127 13:20:14.500306  411032 main.go:141] libmachine: Running pre-create checks...
	I0127 13:20:14.500330  411032 main.go:141] libmachine: (auto-211629) Calling .PreCreateCheck
	I0127 13:20:14.500847  411032 main.go:141] libmachine: (auto-211629) Calling .GetConfigRaw
	I0127 13:20:14.501415  411032 main.go:141] libmachine: Creating machine...
	I0127 13:20:14.501428  411032 main.go:141] libmachine: (auto-211629) Calling .Create
	I0127 13:20:14.501620  411032 main.go:141] libmachine: (auto-211629) creating KVM machine...
	I0127 13:20:14.501633  411032 main.go:141] libmachine: (auto-211629) creating network...
	I0127 13:20:14.503641  411032 main.go:141] libmachine: (auto-211629) DBG | found existing default KVM network
	I0127 13:20:14.506799  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:14.506618  411111 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 13:20:14.508129  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:14.508021  411111 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:42:94:02} reservation:<nil>}
	I0127 13:20:14.509619  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:14.509473  411111 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000115fb0}
	I0127 13:20:14.509650  411032 main.go:141] libmachine: (auto-211629) DBG | created network xml: 
	I0127 13:20:14.509663  411032 main.go:141] libmachine: (auto-211629) DBG | <network>
	I0127 13:20:14.509679  411032 main.go:141] libmachine: (auto-211629) DBG |   <name>mk-auto-211629</name>
	I0127 13:20:14.509689  411032 main.go:141] libmachine: (auto-211629) DBG |   <dns enable='no'/>
	I0127 13:20:14.509708  411032 main.go:141] libmachine: (auto-211629) DBG |   
	I0127 13:20:14.509721  411032 main.go:141] libmachine: (auto-211629) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 13:20:14.509735  411032 main.go:141] libmachine: (auto-211629) DBG |     <dhcp>
	I0127 13:20:14.509751  411032 main.go:141] libmachine: (auto-211629) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 13:20:14.509762  411032 main.go:141] libmachine: (auto-211629) DBG |     </dhcp>
	I0127 13:20:14.509775  411032 main.go:141] libmachine: (auto-211629) DBG |   </ip>
	I0127 13:20:14.509784  411032 main.go:141] libmachine: (auto-211629) DBG |   
	I0127 13:20:14.509796  411032 main.go:141] libmachine: (auto-211629) DBG | </network>
	I0127 13:20:14.509809  411032 main.go:141] libmachine: (auto-211629) DBG | 
	I0127 13:20:14.753981  411032 main.go:141] libmachine: (auto-211629) DBG | trying to create private KVM network mk-auto-211629 192.168.61.0/24...
	I0127 13:20:14.832837  411032 main.go:141] libmachine: (auto-211629) setting up store path in /home/jenkins/minikube-integration/20317-361578/.minikube/machines/auto-211629 ...
	I0127 13:20:14.832866  411032 main.go:141] libmachine: (auto-211629) DBG | private KVM network mk-auto-211629 192.168.61.0/24 created
	I0127 13:20:14.832878  411032 main.go:141] libmachine: (auto-211629) building disk image from file:///home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 13:20:14.832898  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:14.832779  411111 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:20:14.832964  411032 main.go:141] libmachine: (auto-211629) Downloading /home/jenkins/minikube-integration/20317-361578/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 13:20:15.188502  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:15.188356  411111 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/auto-211629/id_rsa...
	I0127 13:20:15.459821  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:15.459674  411111 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/auto-211629/auto-211629.rawdisk...
	I0127 13:20:15.459856  411032 main.go:141] libmachine: (auto-211629) DBG | Writing magic tar header
	I0127 13:20:15.459873  411032 main.go:141] libmachine: (auto-211629) DBG | Writing SSH key tar header
	I0127 13:20:15.459884  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:15.459851  411111 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20317-361578/.minikube/machines/auto-211629 ...
	I0127 13:20:15.460058  411032 main.go:141] libmachine: (auto-211629) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube/machines/auto-211629 (perms=drwx------)
	I0127 13:20:15.460091  411032 main.go:141] libmachine: (auto-211629) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube/machines (perms=drwxr-xr-x)
	I0127 13:20:15.460103  411032 main.go:141] libmachine: (auto-211629) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/auto-211629
	I0127 13:20:15.460113  411032 main.go:141] libmachine: (auto-211629) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube (perms=drwxr-xr-x)
	I0127 13:20:15.460125  411032 main.go:141] libmachine: (auto-211629) setting executable bit set on /home/jenkins/minikube-integration/20317-361578 (perms=drwxrwxr-x)
	I0127 13:20:15.460139  411032 main.go:141] libmachine: (auto-211629) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 13:20:15.460151  411032 main.go:141] libmachine: (auto-211629) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 13:20:15.460167  411032 main.go:141] libmachine: (auto-211629) creating domain...
	I0127 13:20:15.460176  411032 main.go:141] libmachine: (auto-211629) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube/machines
	I0127 13:20:15.460189  411032 main.go:141] libmachine: (auto-211629) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:20:15.460201  411032 main.go:141] libmachine: (auto-211629) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578
	I0127 13:20:15.460211  411032 main.go:141] libmachine: (auto-211629) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 13:20:15.460222  411032 main.go:141] libmachine: (auto-211629) DBG | checking permissions on dir: /home/jenkins
	I0127 13:20:15.460230  411032 main.go:141] libmachine: (auto-211629) DBG | checking permissions on dir: /home
	I0127 13:20:15.460246  411032 main.go:141] libmachine: (auto-211629) DBG | skipping /home - not owner
	I0127 13:20:15.461579  411032 main.go:141] libmachine: (auto-211629) define libvirt domain using xml: 
	I0127 13:20:15.461601  411032 main.go:141] libmachine: (auto-211629) <domain type='kvm'>
	I0127 13:20:15.461611  411032 main.go:141] libmachine: (auto-211629)   <name>auto-211629</name>
	I0127 13:20:15.461617  411032 main.go:141] libmachine: (auto-211629)   <memory unit='MiB'>3072</memory>
	I0127 13:20:15.461625  411032 main.go:141] libmachine: (auto-211629)   <vcpu>2</vcpu>
	I0127 13:20:15.461631  411032 main.go:141] libmachine: (auto-211629)   <features>
	I0127 13:20:15.461645  411032 main.go:141] libmachine: (auto-211629)     <acpi/>
	I0127 13:20:15.461652  411032 main.go:141] libmachine: (auto-211629)     <apic/>
	I0127 13:20:15.461659  411032 main.go:141] libmachine: (auto-211629)     <pae/>
	I0127 13:20:15.461667  411032 main.go:141] libmachine: (auto-211629)     
	I0127 13:20:15.461675  411032 main.go:141] libmachine: (auto-211629)   </features>
	I0127 13:20:15.461684  411032 main.go:141] libmachine: (auto-211629)   <cpu mode='host-passthrough'>
	I0127 13:20:15.461691  411032 main.go:141] libmachine: (auto-211629)   
	I0127 13:20:15.461700  411032 main.go:141] libmachine: (auto-211629)   </cpu>
	I0127 13:20:15.461707  411032 main.go:141] libmachine: (auto-211629)   <os>
	I0127 13:20:15.461715  411032 main.go:141] libmachine: (auto-211629)     <type>hvm</type>
	I0127 13:20:15.461723  411032 main.go:141] libmachine: (auto-211629)     <boot dev='cdrom'/>
	I0127 13:20:15.461728  411032 main.go:141] libmachine: (auto-211629)     <boot dev='hd'/>
	I0127 13:20:15.461742  411032 main.go:141] libmachine: (auto-211629)     <bootmenu enable='no'/>
	I0127 13:20:15.461750  411032 main.go:141] libmachine: (auto-211629)   </os>
	I0127 13:20:15.461757  411032 main.go:141] libmachine: (auto-211629)   <devices>
	I0127 13:20:15.461767  411032 main.go:141] libmachine: (auto-211629)     <disk type='file' device='cdrom'>
	I0127 13:20:15.461779  411032 main.go:141] libmachine: (auto-211629)       <source file='/home/jenkins/minikube-integration/20317-361578/.minikube/machines/auto-211629/boot2docker.iso'/>
	I0127 13:20:15.461789  411032 main.go:141] libmachine: (auto-211629)       <target dev='hdc' bus='scsi'/>
	I0127 13:20:15.461796  411032 main.go:141] libmachine: (auto-211629)       <readonly/>
	I0127 13:20:15.461804  411032 main.go:141] libmachine: (auto-211629)     </disk>
	I0127 13:20:15.461813  411032 main.go:141] libmachine: (auto-211629)     <disk type='file' device='disk'>
	I0127 13:20:15.461824  411032 main.go:141] libmachine: (auto-211629)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 13:20:15.461838  411032 main.go:141] libmachine: (auto-211629)       <source file='/home/jenkins/minikube-integration/20317-361578/.minikube/machines/auto-211629/auto-211629.rawdisk'/>
	I0127 13:20:15.461857  411032 main.go:141] libmachine: (auto-211629)       <target dev='hda' bus='virtio'/>
	I0127 13:20:15.461864  411032 main.go:141] libmachine: (auto-211629)     </disk>
	I0127 13:20:15.461877  411032 main.go:141] libmachine: (auto-211629)     <interface type='network'>
	I0127 13:20:15.461885  411032 main.go:141] libmachine: (auto-211629)       <source network='mk-auto-211629'/>
	I0127 13:20:15.461894  411032 main.go:141] libmachine: (auto-211629)       <model type='virtio'/>
	I0127 13:20:15.461901  411032 main.go:141] libmachine: (auto-211629)     </interface>
	I0127 13:20:15.461910  411032 main.go:141] libmachine: (auto-211629)     <interface type='network'>
	I0127 13:20:15.461918  411032 main.go:141] libmachine: (auto-211629)       <source network='default'/>
	I0127 13:20:15.461924  411032 main.go:141] libmachine: (auto-211629)       <model type='virtio'/>
	I0127 13:20:15.461934  411032 main.go:141] libmachine: (auto-211629)     </interface>
	I0127 13:20:15.461943  411032 main.go:141] libmachine: (auto-211629)     <serial type='pty'>
	I0127 13:20:15.461951  411032 main.go:141] libmachine: (auto-211629)       <target port='0'/>
	I0127 13:20:15.461959  411032 main.go:141] libmachine: (auto-211629)     </serial>
	I0127 13:20:15.461966  411032 main.go:141] libmachine: (auto-211629)     <console type='pty'>
	I0127 13:20:15.461975  411032 main.go:141] libmachine: (auto-211629)       <target type='serial' port='0'/>
	I0127 13:20:15.461982  411032 main.go:141] libmachine: (auto-211629)     </console>
	I0127 13:20:15.461991  411032 main.go:141] libmachine: (auto-211629)     <rng model='virtio'>
	I0127 13:20:15.462000  411032 main.go:141] libmachine: (auto-211629)       <backend model='random'>/dev/random</backend>
	I0127 13:20:15.462008  411032 main.go:141] libmachine: (auto-211629)     </rng>
	I0127 13:20:15.462014  411032 main.go:141] libmachine: (auto-211629)     
	I0127 13:20:15.462029  411032 main.go:141] libmachine: (auto-211629)     
	I0127 13:20:15.462039  411032 main.go:141] libmachine: (auto-211629)   </devices>
	I0127 13:20:15.462044  411032 main.go:141] libmachine: (auto-211629) </domain>
	I0127 13:20:15.462057  411032 main.go:141] libmachine: (auto-211629) 
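
The XML dumped above is the libvirt domain definition for auto-211629. As an illustration only, a heavily trimmed version of such a definition can be produced with encoding/xml; the struct below models just the name, memory and vCPU elements and omits the disks, interfaces, console and RNG device that the real definition carries:

// domain_xml.go - emit a minimal libvirt <domain> definition.
package main

import (
	"encoding/xml"
	"fmt"
)

type domain struct {
	XMLName xml.Name `xml:"domain"`
	Type    string   `xml:"type,attr"`
	Name    string   `xml:"name"`
	Memory  struct {
		Unit  string `xml:"unit,attr"`
		Value int    `xml:",chardata"`
	} `xml:"memory"`
	VCPU int `xml:"vcpu"`
}

func main() {
	d := domain{Type: "kvm", Name: "auto-211629", VCPU: 2}
	d.Memory.Unit = "MiB"
	d.Memory.Value = 3072
	out, err := xml.MarshalIndent(d, "", "  ")
	if err != nil {
		panic(err)
	}
	// The resulting XML is the kind of document a driver would hand to
	// libvirt to define the domain before starting it.
	fmt.Println(string(out))
}
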
	I0127 13:20:15.592734  411032 main.go:141] libmachine: (auto-211629) DBG | domain auto-211629 has defined MAC address 52:54:00:92:4b:94 in network default
	I0127 13:20:15.593509  411032 main.go:141] libmachine: (auto-211629) DBG | domain auto-211629 has defined MAC address 52:54:00:dc:8d:d3 in network mk-auto-211629
	I0127 13:20:15.593531  411032 main.go:141] libmachine: (auto-211629) starting domain...
	I0127 13:20:15.593576  411032 main.go:141] libmachine: (auto-211629) ensuring networks are active...
	I0127 13:20:15.594526  411032 main.go:141] libmachine: (auto-211629) Ensuring network default is active
	I0127 13:20:15.594921  411032 main.go:141] libmachine: (auto-211629) Ensuring network mk-auto-211629 is active
	I0127 13:20:15.620940  411032 main.go:141] libmachine: (auto-211629) getting domain XML...
	I0127 13:20:15.622101  411032 main.go:141] libmachine: (auto-211629) creating domain...
	I0127 13:20:17.352953  411032 main.go:141] libmachine: (auto-211629) waiting for IP...
	I0127 13:20:17.353753  411032 main.go:141] libmachine: (auto-211629) DBG | domain auto-211629 has defined MAC address 52:54:00:dc:8d:d3 in network mk-auto-211629
	I0127 13:20:17.354261  411032 main.go:141] libmachine: (auto-211629) DBG | unable to find current IP address of domain auto-211629 in network mk-auto-211629
	I0127 13:20:17.354330  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:17.354261  411111 retry.go:31] will retry after 311.963281ms: waiting for domain to come up
	I0127 13:20:17.668263  411032 main.go:141] libmachine: (auto-211629) DBG | domain auto-211629 has defined MAC address 52:54:00:dc:8d:d3 in network mk-auto-211629
	I0127 13:20:17.668947  411032 main.go:141] libmachine: (auto-211629) DBG | unable to find current IP address of domain auto-211629 in network mk-auto-211629
	I0127 13:20:17.668977  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:17.668836  411111 retry.go:31] will retry after 353.952271ms: waiting for domain to come up
	I0127 13:20:18.024806  411032 main.go:141] libmachine: (auto-211629) DBG | domain auto-211629 has defined MAC address 52:54:00:dc:8d:d3 in network mk-auto-211629
	I0127 13:20:18.025461  411032 main.go:141] libmachine: (auto-211629) DBG | unable to find current IP address of domain auto-211629 in network mk-auto-211629
	I0127 13:20:18.025493  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:18.025431  411111 retry.go:31] will retry after 433.23761ms: waiting for domain to come up
	I0127 13:20:18.460889  411032 main.go:141] libmachine: (auto-211629) DBG | domain auto-211629 has defined MAC address 52:54:00:dc:8d:d3 in network mk-auto-211629
	I0127 13:20:18.461470  411032 main.go:141] libmachine: (auto-211629) DBG | unable to find current IP address of domain auto-211629 in network mk-auto-211629
	I0127 13:20:18.461500  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:18.461435  411111 retry.go:31] will retry after 500.296404ms: waiting for domain to come up
	I0127 13:20:18.963355  411032 main.go:141] libmachine: (auto-211629) DBG | domain auto-211629 has defined MAC address 52:54:00:dc:8d:d3 in network mk-auto-211629
	I0127 13:20:18.964118  411032 main.go:141] libmachine: (auto-211629) DBG | unable to find current IP address of domain auto-211629 in network mk-auto-211629
	I0127 13:20:18.964153  411032 main.go:141] libmachine: (auto-211629) DBG | I0127 13:20:18.964111  411111 retry.go:31] will retry after 717.279414ms: waiting for domain to come up
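
The retries above (311ms, 353ms, 433ms, 500ms, 717ms) poll for a DHCP lease until the new domain gets an address, sleeping a little longer each time. A minimal sketch of that wait loop; lookupIP is a placeholder, and the growth factor and jitter are assumptions rather than minikube's exact policy:

// wait_for_ip.go - poll a lookup function with growing, jittered delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay = delay * 5 / 4 // grow the base delay ~25% per attempt
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.61.10", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
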
	I0127 13:20:19.154108  410467 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.252491169s)
	I0127 13:20:19.154151  410467 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0127 13:20:19.154210  410467 cache_images.go:92] duration metric: took 6.386469032s to LoadCachedImages
	W0127 13:20:19.154325  410467 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0127 13:20:19.154347  410467 kubeadm.go:934] updating node { 192.168.83.52 8443 v1.24.1 crio true true} ...
	I0127 13:20:19.154503  410467 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=stopped-upgrade-619602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-619602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:20:19.154619  410467 ssh_runner.go:195] Run: crio config
	I0127 13:20:19.201961  410467 cni.go:84] Creating CNI manager for ""
	I0127 13:20:19.201997  410467 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:20:19.202011  410467 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:20:19.202043  410467 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.52 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-619602 NodeName:stopped-upgrade-619602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:20:19.202271  410467 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-619602"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:20:19.202366  410467 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0127 13:20:19.211602  410467 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:20:19.211679  410467 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:20:19.220160  410467 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0127 13:20:19.234954  410467 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:20:19.252268  410467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0127 13:20:19.268260  410467 ssh_runner.go:195] Run: grep 192.168.83.52	control-plane.minikube.internal$ /etc/hosts
	I0127 13:20:19.272388  410467 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.52	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:20:19.286109  410467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:20:19.402980  410467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:20:19.417055  410467 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602 for IP: 192.168.83.52
	I0127 13:20:19.417082  410467 certs.go:194] generating shared ca certs ...
	I0127 13:20:19.417108  410467 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:20:19.417319  410467 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:20:19.417387  410467 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:20:19.417407  410467 certs.go:256] generating profile certs ...
	I0127 13:20:19.417526  410467 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/client.key
	I0127 13:20:19.417563  410467 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.key.29be39b4
	I0127 13:20:19.417598  410467 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.crt.29be39b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.52]
	I0127 13:20:19.676053  410467 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.crt.29be39b4 ...
	I0127 13:20:19.676130  410467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.crt.29be39b4: {Name:mkfcdb4f8505072d19ccdf1985a3afb4ac46cf49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:20:19.676468  410467 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.key.29be39b4 ...
	I0127 13:20:19.676506  410467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.key.29be39b4: {Name:mk86204d2118db4523e6b9027312525afd2e8831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:20:19.676654  410467 certs.go:381] copying /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.crt.29be39b4 -> /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.crt
	I0127 13:20:19.676852  410467 certs.go:385] copying /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.key.29be39b4 -> /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.key
	I0127 13:20:19.677110  410467 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/proxy-client.key
	I0127 13:20:19.677344  410467 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:20:19.677430  410467 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:20:19.677466  410467 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:20:19.677511  410467 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:20:19.677584  410467 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:20:19.677639  410467 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:20:19.677704  410467 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:20:19.678718  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:20:19.725607  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:20:19.764744  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:20:19.796801  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:20:19.827176  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 13:20:19.852168  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:20:19.874446  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:20:19.896039  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 13:20:19.917404  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:20:19.942752  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:20:19.967373  410467 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:20:19.989800  410467 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:20:20.007745  410467 ssh_runner.go:195] Run: openssl version
	I0127 13:20:20.013055  410467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:20:20.022754  410467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:20:20.027056  410467 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:20:20.027111  410467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:20:20.032035  410467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:20:20.042310  410467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:20:20.051851  410467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:20:20.056302  410467 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:20:20.056368  410467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:20:20.061704  410467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:20:20.070564  410467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:20:20.079367  410467 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:20:20.083543  410467 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:20:20.083593  410467 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:20:20.089302  410467 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:20:20.098671  410467 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:20:20.102963  410467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:20:20.108529  410467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:20:20.114216  410467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:20:20.119574  410467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:20:20.124799  410467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:20:20.130037  410467 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 13:20:20.135255  410467 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-619602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-619602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.52 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0127 13:20:20.135329  410467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:20:20.135369  410467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:20:20.169196  410467 cri.go:89] found id: ""
	I0127 13:20:20.169271  410467 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0127 13:20:20.178209  410467 kubeadm.go:405] apiserver tunnel failed: apiserver port not set
	I0127 13:20:20.178233  410467 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:20:20.178239  410467 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:20:20.178284  410467 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:20:20.188656  410467 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:20:20.189426  410467 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-619602" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:20:20.189840  410467 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-619602" cluster setting kubeconfig missing "stopped-upgrade-619602" context setting]
	I0127 13:20:20.190475  410467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:20:20.191494  410467 kapi.go:59] client config for stopped-upgrade-619602: &rest.Config{Host:"https://192.168.83.52:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/client.crt", KeyFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/profiles/stopped-upgrade-619602/client.key", CAFile:"/home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 13:20:20.192219  410467 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:20:20.201773  410467 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "stopped-upgrade-619602"
	   kubeletExtraArgs:
	     node-ip: 192.168.83.52
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0127 13:20:20.201794  410467 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:20:20.201821  410467 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:20:20.201873  410467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:20:20.238682  410467 cri.go:89] found id: ""
	I0127 13:20:20.238760  410467 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:20:20.258958  410467 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:20:20.268481  410467 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:20:20.268501  410467 kubeadm.go:157] found existing configuration files:
	
	I0127 13:20:20.268548  410467 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I0127 13:20:20.277385  410467 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:20:20.277442  410467 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:20:20.285992  410467 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I0127 13:20:20.293702  410467 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:20:20.293794  410467 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:20:20.303437  410467 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I0127 13:20:20.313383  410467 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:20:20.313440  410467 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:20:20.322105  410467 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I0127 13:20:20.331962  410467 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:20:20.332020  410467 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:20:20.342505  410467 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:20:20.350816  410467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:20:20.473424  410467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:20:21.366818  410467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:20:21.645665  410467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:20:21.720715  410467 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:20:21.789347  410467 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:20:21.789438  410467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:20:22.290334  410467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:20:22.790307  410467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:20:23.290208  410467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-511736 -n kubernetes-upgrade-511736
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-511736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-511736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-511736
--- FAIL: TestKubernetesUpgrade (409.00s)
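
Note on the restart path seen in the log above: at kubeadm.go:640 minikube decides to reconfigure the cluster because `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` reports drift (here the criSocket and cgroupDriver lines), after which the kubeadm init phases are rerun. The following is a minimal, hypothetical Go sketch of that kind of drift check only, not minikube's actual implementation: it runs diff locally instead of over SSH with sudo, and the function name kubeadmConfigDrift is invented for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrift runs `diff -u oldPath newPath` and reports whether the
// newly rendered kubeadm config differs from the copy already on the node.
// diff exits 0 when the files match, 1 when they differ, and >1 on error.
func kubeadmConfigDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // configs identical, no reconfiguration needed
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil // drift detected; caller would reconfigure
	}
	return false, "", err // diff itself failed (e.g. a file is missing)
}

func main() {
	drift, diff, err := kubeadmConfigDrift(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("drift check failed:", err)
		return
	}
	if drift {
		fmt.Println("detected kubeadm config drift, will reconfigure:")
		fmt.Println(diff)
	}
}

In the log, the non-empty diff is what drives the subsequent `kubeadm init phase certs/kubeconfig/kubelet-start/control-plane/etcd` commands during the restart.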

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (88.97s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-715621 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-715621 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.832041882s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-715621] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-715621" primary control-plane node in "pause-715621" cluster
	* Updating the running kvm2 "pause-715621" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-715621" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:17:37.266776  408369 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:17:37.267038  408369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:17:37.267047  408369 out.go:358] Setting ErrFile to fd 2...
	I0127 13:17:37.267051  408369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:17:37.267232  408369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:17:37.267766  408369 out.go:352] Setting JSON to false
	I0127 13:17:37.268702  408369 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":21597,"bootTime":1737962260,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:17:37.268803  408369 start.go:139] virtualization: kvm guest
	I0127 13:17:37.270891  408369 out.go:177] * [pause-715621] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:17:37.272447  408369 notify.go:220] Checking for updates...
	I0127 13:17:37.272471  408369 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:17:37.273765  408369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:17:37.275100  408369 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:17:37.276224  408369 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:17:37.277238  408369 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:17:37.278298  408369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:17:37.279637  408369 config.go:182] Loaded profile config "pause-715621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:17:37.280206  408369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:17:37.280301  408369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:17:37.296127  408369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37981
	I0127 13:17:37.296529  408369 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:17:37.297065  408369 main.go:141] libmachine: Using API Version  1
	I0127 13:17:37.297089  408369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:17:37.297413  408369 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:17:37.297609  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:17:37.297848  408369 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:17:37.298262  408369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:17:37.298336  408369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:17:37.312682  408369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43537
	I0127 13:17:37.313039  408369 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:17:37.313490  408369 main.go:141] libmachine: Using API Version  1
	I0127 13:17:37.313515  408369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:17:37.313801  408369 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:17:37.313995  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:17:37.345835  408369 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:17:37.347082  408369 start.go:297] selected driver: kvm2
	I0127 13:17:37.347100  408369 start.go:901] validating driver "kvm2" against &{Name:pause-715621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-715621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:17:37.347297  408369 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:17:37.347754  408369 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:17:37.347835  408369 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:17:37.362979  408369 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:17:37.364048  408369 cni.go:84] Creating CNI manager for ""
	I0127 13:17:37.364116  408369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:17:37.364199  408369 start.go:340] cluster config:
	{Name:pause-715621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-715621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:17:37.364370  408369 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:17:37.366002  408369 out.go:177] * Starting "pause-715621" primary control-plane node in "pause-715621" cluster
	I0127 13:17:37.367195  408369 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:17:37.367236  408369 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:17:37.367248  408369 cache.go:56] Caching tarball of preloaded images
	I0127 13:17:37.367345  408369 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:17:37.367358  408369 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:17:37.367529  408369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621/config.json ...
	I0127 13:17:37.367762  408369 start.go:360] acquireMachinesLock for pause-715621: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:18:17.431477  408369 start.go:364] duration metric: took 40.063681967s to acquireMachinesLock for "pause-715621"
	I0127 13:18:17.431549  408369 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:18:17.431561  408369 fix.go:54] fixHost starting: 
	I0127 13:18:17.431966  408369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:18:17.432019  408369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:18:17.450531  408369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
	I0127 13:18:17.450971  408369 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:18:17.451448  408369 main.go:141] libmachine: Using API Version  1
	I0127 13:18:17.451474  408369 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:18:17.451834  408369 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:18:17.452029  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:18:17.452193  408369 main.go:141] libmachine: (pause-715621) Calling .GetState
	I0127 13:18:17.453744  408369 fix.go:112] recreateIfNeeded on pause-715621: state=Running err=<nil>
	W0127 13:18:17.453763  408369 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:18:17.455339  408369 out.go:177] * Updating the running kvm2 "pause-715621" VM ...
	I0127 13:18:17.456681  408369 machine.go:93] provisionDockerMachine start ...
	I0127 13:18:17.456717  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:18:17.456903  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:17.459096  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.459519  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:17.459545  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.459704  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:17.459868  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:17.460021  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:17.460156  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:17.460308  408369 main.go:141] libmachine: Using SSH client type: native
	I0127 13:18:17.460504  408369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0127 13:18:17.460521  408369 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:18:17.575542  408369 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-715621
	
	I0127 13:18:17.575570  408369 main.go:141] libmachine: (pause-715621) Calling .GetMachineName
	I0127 13:18:17.575841  408369 buildroot.go:166] provisioning hostname "pause-715621"
	I0127 13:18:17.575867  408369 main.go:141] libmachine: (pause-715621) Calling .GetMachineName
	I0127 13:18:17.576035  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:17.578995  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.579422  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:17.579454  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.579606  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:17.579786  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:17.579954  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:17.580082  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:17.580241  408369 main.go:141] libmachine: Using SSH client type: native
	I0127 13:18:17.580476  408369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0127 13:18:17.580493  408369 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-715621 && echo "pause-715621" | sudo tee /etc/hostname
	I0127 13:18:17.717150  408369 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-715621
	
	I0127 13:18:17.717178  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:17.720095  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.720425  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:17.720448  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.720607  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:17.720790  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:17.720953  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:17.721050  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:17.721207  408369 main.go:141] libmachine: Using SSH client type: native
	I0127 13:18:17.721424  408369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0127 13:18:17.721443  408369 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-715621' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-715621/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-715621' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:18:17.836550  408369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:18:17.836589  408369 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:18:17.836647  408369 buildroot.go:174] setting up certificates
	I0127 13:18:17.836678  408369 provision.go:84] configureAuth start
	I0127 13:18:17.836702  408369 main.go:141] libmachine: (pause-715621) Calling .GetMachineName
	I0127 13:18:17.837017  408369 main.go:141] libmachine: (pause-715621) Calling .GetIP
	I0127 13:18:17.840066  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.840477  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:17.840506  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.840643  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:17.842995  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.843359  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:17.843394  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.843538  408369 provision.go:143] copyHostCerts
	I0127 13:18:17.843594  408369 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:18:17.843604  408369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:18:17.843657  408369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:18:17.843756  408369 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:18:17.843763  408369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:18:17.843783  408369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:18:17.843858  408369 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:18:17.843866  408369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:18:17.843884  408369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:18:17.843944  408369 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.pause-715621 san=[127.0.0.1 192.168.39.99 localhost minikube pause-715621]
	I0127 13:18:17.943445  408369 provision.go:177] copyRemoteCerts
	I0127 13:18:17.943495  408369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:18:17.943527  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:17.946650  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.946974  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:17.946999  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:17.947133  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:17.947320  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:17.947490  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:17.947615  408369 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/pause-715621/id_rsa Username:docker}
	I0127 13:18:18.045499  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:18:18.074811  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 13:18:18.109145  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:18:18.138363  408369 provision.go:87] duration metric: took 301.664886ms to configureAuth
	I0127 13:18:18.138396  408369 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:18:18.138687  408369 config.go:182] Loaded profile config "pause-715621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:18:18.138792  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:18.141564  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:18.141954  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:18.142002  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:18.142252  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:18.142446  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:18.142626  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:18.142763  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:18.142906  408369 main.go:141] libmachine: Using SSH client type: native
	I0127 13:18:18.143091  408369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0127 13:18:18.143106  408369 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:18:23.669440  408369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:18:23.669475  408369 machine.go:96] duration metric: took 6.212766668s to provisionDockerMachine
	I0127 13:18:23.669493  408369 start.go:293] postStartSetup for "pause-715621" (driver="kvm2")
	I0127 13:18:23.669507  408369 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:18:23.669556  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:18:23.669958  408369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:18:23.669997  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:23.673321  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.673755  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:23.673787  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.673997  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:23.674231  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:23.674421  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:23.674656  408369 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/pause-715621/id_rsa Username:docker}
	I0127 13:18:23.764956  408369 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:18:23.769374  408369 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:18:23.769406  408369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:18:23.769488  408369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:18:23.769583  408369 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:18:23.769678  408369 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:18:23.778956  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:18:23.804256  408369 start.go:296] duration metric: took 134.747899ms for postStartSetup
	I0127 13:18:23.804309  408369 fix.go:56] duration metric: took 6.372747835s for fixHost
	I0127 13:18:23.804351  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:23.807499  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.807864  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:23.807898  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.808080  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:23.808332  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:23.808527  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:23.808738  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:23.808922  408369 main.go:141] libmachine: Using SSH client type: native
	I0127 13:18:23.809122  408369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.99 22 <nil> <nil>}
	I0127 13:18:23.809141  408369 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:18:23.935491  408369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737983903.928732121
	
	I0127 13:18:23.935521  408369 fix.go:216] guest clock: 1737983903.928732121
	I0127 13:18:23.935531  408369 fix.go:229] Guest: 2025-01-27 13:18:23.928732121 +0000 UTC Remote: 2025-01-27 13:18:23.804326973 +0000 UTC m=+46.577529943 (delta=124.405148ms)
	I0127 13:18:23.935579  408369 fix.go:200] guest clock delta is within tolerance: 124.405148ms
	I0127 13:18:23.935588  408369 start.go:83] releasing machines lock for "pause-715621", held for 6.504068368s
	I0127 13:18:23.935617  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:18:23.935885  408369 main.go:141] libmachine: (pause-715621) Calling .GetIP
	I0127 13:18:23.938779  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.939131  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:23.939162  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.939281  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:18:23.939790  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:18:23.939971  408369 main.go:141] libmachine: (pause-715621) Calling .DriverName
	I0127 13:18:23.940081  408369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:18:23.940138  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:23.940212  408369 ssh_runner.go:195] Run: cat /version.json
	I0127 13:18:23.940239  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHHostname
	I0127 13:18:23.943026  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.943408  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:23.943442  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.943476  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.943628  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:23.943760  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:23.943890  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:23.943918  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:23.943926  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:23.944064  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHPort
	I0127 13:18:23.944069  408369 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/pause-715621/id_rsa Username:docker}
	I0127 13:18:23.944212  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHKeyPath
	I0127 13:18:23.944399  408369 main.go:141] libmachine: (pause-715621) Calling .GetSSHUsername
	I0127 13:18:23.944554  408369 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/pause-715621/id_rsa Username:docker}
	I0127 13:18:24.046514  408369 ssh_runner.go:195] Run: systemctl --version
	I0127 13:18:24.053231  408369 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:18:24.210908  408369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:18:24.217124  408369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:18:24.217182  408369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:18:24.226398  408369 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 13:18:24.226427  408369 start.go:495] detecting cgroup driver to use...
	I0127 13:18:24.226493  408369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:18:24.246842  408369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:18:24.260036  408369 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:18:24.260094  408369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:18:24.273821  408369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:18:24.286842  408369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:18:24.419845  408369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:18:24.557710  408369 docker.go:233] disabling docker service ...
	I0127 13:18:24.557811  408369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:18:24.576024  408369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:18:24.591727  408369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:18:24.738571  408369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:18:24.877414  408369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:18:24.912522  408369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:18:24.937950  408369 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:18:24.938036  408369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:18:24.949698  408369 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:18:24.949792  408369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:18:24.964526  408369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:18:24.976632  408369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:18:24.990334  408369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:18:25.002707  408369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:18:25.015053  408369 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:18:25.025899  408369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:18:25.037456  408369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:18:25.047550  408369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:18:25.057453  408369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:18:25.178456  408369 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:18:26.038592  408369 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:18:26.038703  408369 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:18:26.043437  408369 start.go:563] Will wait 60s for crictl version
	I0127 13:18:26.043501  408369 ssh_runner.go:195] Run: which crictl
	I0127 13:18:26.047225  408369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:18:26.085523  408369 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:18:26.085584  408369 ssh_runner.go:195] Run: crio --version
	I0127 13:18:26.114265  408369 ssh_runner.go:195] Run: crio --version
	I0127 13:18:26.145381  408369 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:18:26.146712  408369 main.go:141] libmachine: (pause-715621) Calling .GetIP
	I0127 13:18:26.149203  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:26.149535  408369 main.go:141] libmachine: (pause-715621) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a1:21", ip: ""} in network mk-pause-715621: {Iface:virbr1 ExpiryTime:2025-01-27 14:16:59 +0000 UTC Type:0 Mac:52:54:00:3f:a1:21 Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:pause-715621 Clientid:01:52:54:00:3f:a1:21}
	I0127 13:18:26.149558  408369 main.go:141] libmachine: (pause-715621) DBG | domain pause-715621 has defined IP address 192.168.39.99 and MAC address 52:54:00:3f:a1:21 in network mk-pause-715621
	I0127 13:18:26.149842  408369 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 13:18:26.157452  408369 kubeadm.go:883] updating cluster {Name:pause-715621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-715621 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portain
er:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:18:26.157647  408369 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:18:26.157718  408369 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:18:26.412310  408369 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:18:26.412351  408369 crio.go:433] Images already preloaded, skipping extraction
	I0127 13:18:26.412433  408369 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:18:26.555706  408369 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:18:26.555740  408369 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:18:26.555750  408369 kubeadm.go:934] updating node { 192.168.39.99 8443 v1.32.1 crio true true} ...
	I0127 13:18:26.555893  408369 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-715621 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-715621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:18:26.555994  408369 ssh_runner.go:195] Run: crio config
	I0127 13:18:26.769611  408369 cni.go:84] Creating CNI manager for ""
	I0127 13:18:26.769669  408369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:18:26.769699  408369 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:18:26.769735  408369 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.99 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-715621 NodeName:pause-715621 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:18:26.769947  408369 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-715621"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.99"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.99"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:18:26.770086  408369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:18:26.781853  408369 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:18:26.781934  408369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:18:26.792299  408369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0127 13:18:26.812431  408369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:18:26.839154  408369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0127 13:18:26.861297  408369 ssh_runner.go:195] Run: grep 192.168.39.99	control-plane.minikube.internal$ /etc/hosts
	I0127 13:18:26.866436  408369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:18:27.097554  408369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:18:27.206053  408369 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621 for IP: 192.168.39.99
	I0127 13:18:27.206090  408369 certs.go:194] generating shared ca certs ...
	I0127 13:18:27.206117  408369 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:27.206311  408369 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:18:27.206369  408369 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:18:27.206383  408369 certs.go:256] generating profile certs ...
	I0127 13:18:27.206490  408369 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621/client.key
	I0127 13:18:27.222087  408369 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621/apiserver.key.98616f01
	I0127 13:18:27.222207  408369 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621/proxy-client.key
	I0127 13:18:27.222379  408369 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:18:27.222434  408369 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:18:27.222448  408369 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:18:27.222486  408369 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:18:27.222526  408369 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:18:27.222583  408369 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:18:27.222640  408369 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:18:27.223450  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:18:27.272199  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:18:27.348267  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:18:27.395213  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:18:27.440846  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0127 13:18:27.522825  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:18:27.591770  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:18:27.625135  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/pause-715621/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 13:18:27.661115  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:18:27.697370  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:18:27.729838  408369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:18:27.759292  408369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:18:27.778910  408369 ssh_runner.go:195] Run: openssl version
	I0127 13:18:27.788217  408369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:18:27.803194  408369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:18:27.807919  408369 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:18:27.807977  408369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:18:27.815919  408369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:18:27.829735  408369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:18:27.842655  408369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:18:27.848402  408369 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:18:27.848459  408369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:18:27.855004  408369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:18:27.865508  408369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:18:27.878485  408369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:18:27.883404  408369 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:18:27.883453  408369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:18:27.891277  408369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:18:27.906598  408369 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:18:27.913321  408369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:18:27.921252  408369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:18:27.930082  408369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:18:27.936318  408369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:18:27.947131  408369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:18:27.956934  408369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 13:18:27.965522  408369 kubeadm.go:392] StartCluster: {Name:pause-715621 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-715621 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:
false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:18:27.965693  408369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:18:27.965764  408369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:18:28.058516  408369 cri.go:89] found id: "912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603"
	I0127 13:18:28.058567  408369 cri.go:89] found id: "6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae"
	I0127 13:18:28.058573  408369 cri.go:89] found id: "42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c"
	I0127 13:18:28.058578  408369 cri.go:89] found id: "4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350"
	I0127 13:18:28.058582  408369 cri.go:89] found id: "e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8"
	I0127 13:18:28.058586  408369 cri.go:89] found id: "98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd"
	I0127 13:18:28.058590  408369 cri.go:89] found id: "6930b75d5bbf2a25fb553ad93f212a983d55e74519c2e848487f84c71bfa801f"
	I0127 13:18:28.058594  408369 cri.go:89] found id: "4fc9013851c3059a91015ac1396d6636eeae81d502f558d80ae951e12d334390"
	I0127 13:18:28.058598  408369 cri.go:89] found id: "744a587c8eb61b1f09c26048f0cf1634483ce40ebe052c114e12ed277e2a8fbe"
	I0127 13:18:28.058607  408369 cri.go:89] found id: "32e2c4c0771a188da5e21bac6394968b8ce509fcfbc01f1416c729f114c0dd81"
	I0127 13:18:28.058611  408369 cri.go:89] found id: "1a946a77c2bf975c64db2fbf19e33949c1f471f88854a0b16108265176ce28fd"
	I0127 13:18:28.058615  408369 cri.go:89] found id: ""
	I0127 13:18:28.058676  408369 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-715621 -n pause-715621
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-715621 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-715621 logs -n 25: (1.425730073s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo cat              | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo cat              | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo find             | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo crio             | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-211629                       | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC | 27 Jan 25 13:16 UTC |
	| ssh     | -p NoKubernetes-392035 sudo            | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| start   | -p pause-715621 --memory=2048          | pause-715621              | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC | 27 Jan 25 13:17 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-392035                 | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p NoKubernetes-392035                 | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-413928              | running-upgrade-413928    | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p cert-expiration-180143              | cert-expiration-180143    | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-392035 sudo            | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-392035                 | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p force-systemd-flag-268206           | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-715621                        | pause-715621              | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:19 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-268206 ssh cat      | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-268206           | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	| start   | -p stopped-upgrade-619602              | minikube                  | jenkins | v1.26.0 | 27 Jan 25 13:18 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-511736           | kubernetes-upgrade-511736 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	| start   | -p kubernetes-upgrade-511736           | kubernetes-upgrade-511736 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:18:54
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:18:54.697885  409716 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:18:54.698002  409716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:18:54.698013  409716 out.go:358] Setting ErrFile to fd 2...
	I0127 13:18:54.698020  409716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:18:54.698218  409716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:18:54.698809  409716 out.go:352] Setting JSON to false
	I0127 13:18:54.699739  409716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":21675,"bootTime":1737962260,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:18:54.699855  409716 start.go:139] virtualization: kvm guest
	I0127 13:18:54.701958  409716 out.go:177] * [kubernetes-upgrade-511736] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:18:54.703354  409716 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:18:54.703391  409716 notify.go:220] Checking for updates...
	I0127 13:18:54.705875  409716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:18:54.707220  409716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:18:54.708423  409716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:18:54.709585  409716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:18:54.710654  409716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:18:54.712180  409716 config.go:182] Loaded profile config "kubernetes-upgrade-511736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 13:18:54.712602  409716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:18:54.712680  409716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:18:54.728940  409716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0127 13:18:54.729344  409716 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:18:54.729848  409716 main.go:141] libmachine: Using API Version  1
	I0127 13:18:54.729869  409716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:18:54.730264  409716 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:18:54.730473  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:18:54.730743  409716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:18:54.731033  409716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:18:54.731079  409716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:18:54.745532  409716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0127 13:18:54.745961  409716 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:18:54.746445  409716 main.go:141] libmachine: Using API Version  1
	I0127 13:18:54.746469  409716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:18:54.746846  409716 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:18:54.746998  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:18:54.782249  409716 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:18:54.783571  409716 start.go:297] selected driver: kvm2
	I0127 13:18:54.783587  409716 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-up
grade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:18:54.783691  409716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:18:54.784443  409716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:18:54.784548  409716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:18:54.799531  409716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:18:54.799918  409716 cni.go:84] Creating CNI manager for ""
	I0127 13:18:54.799984  409716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:18:54.800030  409716 start.go:340] cluster config:
	{Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:18:54.800123  409716 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:18:54.801679  409716 out.go:177] * Starting "kubernetes-upgrade-511736" primary control-plane node in "kubernetes-upgrade-511736" cluster
	I0127 13:18:53.624992  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:18:53.625324  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | unable to find current IP address of domain stopped-upgrade-619602 in network mk-stopped-upgrade-619602
	I0127 13:18:53.625349  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | I0127 13:18:53.625277  409490 retry.go:31] will retry after 3.105187151s: waiting for domain to come up
	I0127 13:18:53.383834  408369 pod_ready.go:103] pod "etcd-pause-715621" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:55.879664  408369 pod_ready.go:103] pod "etcd-pause-715621" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:57.379576  408369 pod_ready.go:93] pod "etcd-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:57.379602  408369 pod_ready.go:82] duration metric: took 11.005522s for pod "etcd-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:57.379616  408369 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.385715  408369 pod_ready.go:93] pod "kube-apiserver-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.385746  408369 pod_ready.go:82] duration metric: took 1.006120493s for pod "kube-apiserver-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.385760  408369 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.391193  408369 pod_ready.go:93] pod "kube-controller-manager-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.391213  408369 pod_ready.go:82] duration metric: took 5.445222ms for pod "kube-controller-manager-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.391223  408369 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bmqwx" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.395665  408369 pod_ready.go:93] pod "kube-proxy-bmqwx" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.395686  408369 pod_ready.go:82] duration metric: took 4.456879ms for pod "kube-proxy-bmqwx" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.395695  408369 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.401834  408369 pod_ready.go:93] pod "kube-scheduler-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.401851  408369 pod_ready.go:82] duration metric: took 6.150754ms for pod "kube-scheduler-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.401858  408369 pod_ready.go:39] duration metric: took 12.036663963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:18:58.401875  408369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:18:58.415823  408369 ops.go:34] apiserver oom_adj: -16
	I0127 13:18:58.415843  408369 kubeadm.go:597] duration metric: took 30.265865595s to restartPrimaryControlPlane
	I0127 13:18:58.415851  408369 kubeadm.go:394] duration metric: took 30.450339394s to StartCluster
	I0127 13:18:58.415868  408369 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:58.415948  408369 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:18:58.417213  408369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:58.417461  408369 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:18:58.417582  408369 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:18:58.417747  408369 config.go:182] Loaded profile config "pause-715621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:18:58.418896  408369 out.go:177] * Verifying Kubernetes components...
	I0127 13:18:58.418896  408369 out.go:177] * Enabled addons: 
	I0127 13:18:54.802765  409716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:18:54.802798  409716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:18:54.802805  409716 cache.go:56] Caching tarball of preloaded images
	I0127 13:18:54.802882  409716 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:18:54.802893  409716 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:18:54.802976  409716 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/config.json ...
	I0127 13:18:54.803139  409716 start.go:360] acquireMachinesLock for kubernetes-upgrade-511736: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:18:56.731611  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:18:56.732195  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | unable to find current IP address of domain stopped-upgrade-619602 in network mk-stopped-upgrade-619602
	I0127 13:18:56.732219  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | I0127 13:18:56.732161  409490 retry.go:31] will retry after 3.834401708s: waiting for domain to come up
	I0127 13:19:00.567674  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.568073  409466 main.go:134] libmachine: (stopped-upgrade-619602) found domain IP: 192.168.83.52
	I0127 13:19:00.568088  409466 main.go:134] libmachine: (stopped-upgrade-619602) reserving static IP address...
	I0127 13:19:00.568104  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has current primary IP address 192.168.83.52 and MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.568518  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | unable to find host DHCP lease matching {name: "stopped-upgrade-619602", mac: "52:54:00:88:84:48", ip: "192.168.83.52"} in network mk-stopped-upgrade-619602
	I0127 13:19:00.644275  409466 main.go:134] libmachine: (stopped-upgrade-619602) reserved static IP address 192.168.83.52 for domain stopped-upgrade-619602
	I0127 13:19:00.644297  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | Getting to WaitForSSH function...
	I0127 13:19:00.644307  409466 main.go:134] libmachine: (stopped-upgrade-619602) waiting for SSH...
	I0127 13:19:00.647436  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.647870  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:84:48", ip: ""} in network mk-stopped-upgrade-619602: {Iface:virbr3 ExpiryTime:2025-01-27 14:18:55 +0000 UTC Type:0 Mac:52:54:00:88:84:48 Iaid: IPaddr:192.168.83.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:84:48}
	I0127 13:19:00.647898  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined IP address 192.168.83.52 and MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.648037  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | Using SSH client type: external
	I0127 13:19:00.648060  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/stopped-upgrade-619602/id_rsa (-rw-------)
	I0127 13:19:00.648089  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/stopped-upgrade-619602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:19:00.648097  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | About to run SSH command:
	I0127 13:19:00.648109  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | exit 0
	I0127 13:19:00.741818  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | SSH cmd err, output: <nil>: 
	I0127 13:19:00.742060  409466 main.go:134] libmachine: (stopped-upgrade-619602) KVM machine creation complete
	I0127 13:19:00.742460  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetConfigRaw
	I0127 13:19:00.743020  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .DriverName
	I0127 13:19:00.743235  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .DriverName
	I0127 13:19:00.743370  409466 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 13:19:00.743382  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetState
	I0127 13:19:00.744638  409466 main.go:134] libmachine: Detecting operating system of created instance...
	I0127 13:19:00.744647  409466 main.go:134] libmachine: Waiting for SSH to be available...
	I0127 13:19:00.744653  409466 main.go:134] libmachine: Getting to WaitForSSH function...
	I0127 13:19:00.744663  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHHostname
	I0127 13:19:00.746907  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.747206  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:84:48", ip: ""} in network mk-stopped-upgrade-619602: {Iface:virbr3 ExpiryTime:2025-01-27 14:18:55 +0000 UTC Type:0 Mac:52:54:00:88:84:48 Iaid: IPaddr:192.168.83.52 Prefix:24 Hostname:stopped-upgrade-619602 Clientid:01:52:54:00:88:84:48}
	I0127 13:19:00.747229  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined IP address 192.168.83.52 and MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.747419  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHPort
	I0127 13:19:00.747585  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHKeyPath
	I0127 13:19:00.747715  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHKeyPath
	I0127 13:19:00.747810  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHUsername
	I0127 13:19:00.747944  409466 main.go:134] libmachine: Using SSH client type: native
	I0127 13:19:00.748118  409466 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 192.168.83.52 22 <nil> <nil>}
	I0127 13:19:00.748128  409466 main.go:134] libmachine: About to run SSH command:
	exit 0
	I0127 13:19:00.873255  409466 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:19:00.873267  409466 main.go:134] libmachine: Detecting the provisioner...
	I0127 13:19:00.873274  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHHostname
	I0127 13:19:00.875992  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.876387  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:84:48", ip: ""} in network mk-stopped-upgrade-619602: {Iface:virbr3 ExpiryTime:2025-01-27 14:18:55 +0000 UTC Type:0 Mac:52:54:00:88:84:48 Iaid: IPaddr:192.168.83.52 Prefix:24 Hostname:stopped-upgrade-619602 Clientid:01:52:54:00:88:84:48}
	I0127 13:19:00.876408  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined IP address 192.168.83.52 and MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.876533  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHPort
	I0127 13:19:00.876712  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHKeyPath
	I0127 13:19:00.876888  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHKeyPath
	I0127 13:19:00.877033  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHUsername
	I0127 13:19:00.877188  409466 main.go:134] libmachine: Using SSH client type: native
	I0127 13:19:00.877309  409466 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 192.168.83.52 22 <nil> <nil>}
	I0127 13:19:00.877315  409466 main.go:134] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 13:18:58.420092  408369 addons.go:514] duration metric: took 2.516323ms for enable addons: enabled=[]
	I0127 13:18:58.420125  408369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:18:58.563821  408369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:18:58.581421  408369 node_ready.go:35] waiting up to 6m0s for node "pause-715621" to be "Ready" ...
	I0127 13:18:58.584442  408369 node_ready.go:49] node "pause-715621" has status "Ready":"True"
	I0127 13:18:58.584464  408369 node_ready.go:38] duration metric: took 3.003779ms for node "pause-715621" to be "Ready" ...
	I0127 13:18:58.584477  408369 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:18:58.590435  408369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-648ck" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.977989  408369 pod_ready.go:93] pod "coredns-668d6bf9bc-648ck" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.978018  408369 pod_ready.go:82] duration metric: took 387.562268ms for pod "coredns-668d6bf9bc-648ck" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.978030  408369 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:59.378401  408369 pod_ready.go:93] pod "etcd-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:59.378431  408369 pod_ready.go:82] duration metric: took 400.394354ms for pod "etcd-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:59.378443  408369 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:59.778435  408369 pod_ready.go:93] pod "kube-apiserver-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:59.778458  408369 pod_ready.go:82] duration metric: took 400.006887ms for pod "kube-apiserver-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:59.778468  408369 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.178105  408369 pod_ready.go:93] pod "kube-controller-manager-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:19:00.178137  408369 pod_ready.go:82] duration metric: took 399.660963ms for pod "kube-controller-manager-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.178148  408369 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bmqwx" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.578377  408369 pod_ready.go:93] pod "kube-proxy-bmqwx" in "kube-system" namespace has status "Ready":"True"
	I0127 13:19:00.578403  408369 pod_ready.go:82] duration metric: took 400.247896ms for pod "kube-proxy-bmqwx" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.578416  408369 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.978461  408369 pod_ready.go:93] pod "kube-scheduler-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:19:00.978492  408369 pod_ready.go:82] duration metric: took 400.06632ms for pod "kube-scheduler-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.978505  408369 pod_ready.go:39] duration metric: took 2.394015491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:19:00.978525  408369 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:19:00.978614  408369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:19:00.996449  408369 api_server.go:72] duration metric: took 2.578949849s to wait for apiserver process to appear ...
	I0127 13:19:00.996479  408369 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:19:00.996503  408369 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I0127 13:19:01.002660  408369 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I0127 13:19:01.003911  408369 api_server.go:141] control plane version: v1.32.1
	I0127 13:19:01.003939  408369 api_server.go:131] duration metric: took 7.450455ms to wait for apiserver health ...
	I0127 13:19:01.003949  408369 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:19:01.182903  408369 system_pods.go:59] 6 kube-system pods found
	I0127 13:19:01.182931  408369 system_pods.go:61] "coredns-668d6bf9bc-648ck" [565bb444-713e-4588-b9c5-7fca2461c011] Running
	I0127 13:19:01.182935  408369 system_pods.go:61] "etcd-pause-715621" [40996295-2939-4b9c-a962-3b72e1005238] Running
	I0127 13:19:01.182945  408369 system_pods.go:61] "kube-apiserver-pause-715621" [cd262431-be72-4e4c-9734-6f5df33b8801] Running
	I0127 13:19:01.182949  408369 system_pods.go:61] "kube-controller-manager-pause-715621" [e43a90ef-cbfa-4da4-af4f-90c73116f0bd] Running
	I0127 13:19:01.182955  408369 system_pods.go:61] "kube-proxy-bmqwx" [39735320-ee7a-421a-8a86-d98f13ed5917] Running
	I0127 13:19:01.182958  408369 system_pods.go:61] "kube-scheduler-pause-715621" [89abe876-3e9e-495b-9315-95f7b7660b49] Running
	I0127 13:19:01.182963  408369 system_pods.go:74] duration metric: took 179.008588ms to wait for pod list to return data ...
	I0127 13:19:01.182970  408369 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:19:01.378592  408369 default_sa.go:45] found service account: "default"
	I0127 13:19:01.378619  408369 default_sa.go:55] duration metric: took 195.64227ms for default service account to be created ...
	I0127 13:19:01.378629  408369 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:19:01.580474  408369 system_pods.go:87] 6 kube-system pods found
	I0127 13:19:01.778656  408369 system_pods.go:105] "coredns-668d6bf9bc-648ck" [565bb444-713e-4588-b9c5-7fca2461c011] Running
	I0127 13:19:01.778678  408369 system_pods.go:105] "etcd-pause-715621" [40996295-2939-4b9c-a962-3b72e1005238] Running
	I0127 13:19:01.778683  408369 system_pods.go:105] "kube-apiserver-pause-715621" [cd262431-be72-4e4c-9734-6f5df33b8801] Running
	I0127 13:19:01.778688  408369 system_pods.go:105] "kube-controller-manager-pause-715621" [e43a90ef-cbfa-4da4-af4f-90c73116f0bd] Running
	I0127 13:19:01.778694  408369 system_pods.go:105] "kube-proxy-bmqwx" [39735320-ee7a-421a-8a86-d98f13ed5917] Running
	I0127 13:19:01.778699  408369 system_pods.go:105] "kube-scheduler-pause-715621" [89abe876-3e9e-495b-9315-95f7b7660b49] Running
	I0127 13:19:01.778708  408369 system_pods.go:147] duration metric: took 400.071629ms to wait for k8s-apps to be running ...
	I0127 13:19:01.778718  408369 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 13:19:01.778773  408369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:19:01.795632  408369 system_svc.go:56] duration metric: took 16.900901ms WaitForService to wait for kubelet
	I0127 13:19:01.795664  408369 kubeadm.go:582] duration metric: took 3.378170986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:19:01.795696  408369 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:19:01.978145  408369 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:19:01.978179  408369 node_conditions.go:123] node cpu capacity is 2
	I0127 13:19:01.978194  408369 node_conditions.go:105] duration metric: took 182.492916ms to run NodePressure ...
	I0127 13:19:01.978211  408369 start.go:241] waiting for startup goroutines ...
	I0127 13:19:01.978220  408369 start.go:246] waiting for cluster config update ...
	I0127 13:19:01.978230  408369 start.go:255] writing updated cluster config ...
	I0127 13:19:01.978626  408369 ssh_runner.go:195] Run: rm -f paused
	I0127 13:19:02.029743  408369 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:19:02.032770  408369 out.go:177] * Done! kubectl is now configured to use "pause-715621" cluster and "default" namespace by default
	I0127 13:19:02.231025  409716 start.go:364] duration metric: took 7.427839593s to acquireMachinesLock for "kubernetes-upgrade-511736"
	I0127 13:19:02.231071  409716 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:19:02.231077  409716 fix.go:54] fixHost starting: 
	I0127 13:19:02.231424  409716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:19:02.231470  409716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:19:02.249158  409716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0127 13:19:02.249555  409716 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:19:02.250116  409716 main.go:141] libmachine: Using API Version  1
	I0127 13:19:02.250143  409716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:19:02.250527  409716 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:19:02.250788  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:19:02.250940  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetState
	I0127 13:19:02.252481  409716 fix.go:112] recreateIfNeeded on kubernetes-upgrade-511736: state=Stopped err=<nil>
	I0127 13:19:02.252520  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	W0127 13:19:02.252771  409716 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:19:02.254991  409716 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-511736" ...
	
	
	==> CRI-O <==
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.732233770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983942732202326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74e465bd-d77f-4e14-9ed5-25310f6c239d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.733011179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c3b3cde-109f-44d5-956c-0e23233491a9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.733285341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c3b3cde-109f-44d5-956c-0e23233491a9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.733797936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983925776076018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983921956790208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983921925002117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983921945070155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983921918246526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be,PodSandboxId:a895a94cb42009796cf80050b131e20c7455e401e01dafbcdbca2454a91746a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983908019861713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737983907097902336,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737983906845598640,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737983906513095123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737983906605635954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737983906485896238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd,PodSandboxId:f92ad0817295d0fdbf6fb10a94fdb94c3972151c4cb431ef5bf856facb8ae3a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737983850320922716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c3b3cde-109f-44d5-956c-0e23233491a9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.780401184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03f73441-fa1b-4d9c-b95a-dbed5790396f name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.780502587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03f73441-fa1b-4d9c-b95a-dbed5790396f name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.781872208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67faf645-785d-47a2-a00a-1976c847779e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.782637005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983942782606650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67faf645-785d-47a2-a00a-1976c847779e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.783403988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=230f653d-ab62-4a28-8e93-90b23ae39ee1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.783477895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=230f653d-ab62-4a28-8e93-90b23ae39ee1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.783839543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983925776076018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983921956790208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983921925002117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983921945070155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983921918246526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be,PodSandboxId:a895a94cb42009796cf80050b131e20c7455e401e01dafbcdbca2454a91746a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983908019861713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737983907097902336,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737983906845598640,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737983906513095123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737983906605635954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737983906485896238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd,PodSandboxId:f92ad0817295d0fdbf6fb10a94fdb94c3972151c4cb431ef5bf856facb8ae3a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737983850320922716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=230f653d-ab62-4a28-8e93-90b23ae39ee1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.834377124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a944cbcf-eba4-45a6-b73b-9c298385a6cf name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.834485207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a944cbcf-eba4-45a6-b73b-9c298385a6cf name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.836507996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e2d8b24-1514-4685-aced-cbd691bc2568 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.837016474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983942836985968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e2d8b24-1514-4685-aced-cbd691bc2568 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.837825704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc3bbb46-85c2-4451-9bf5-543782b82eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.837896167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc3bbb46-85c2-4451-9bf5-543782b82eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.838332134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983925776076018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983921956790208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983921925002117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983921945070155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983921918246526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be,PodSandboxId:a895a94cb42009796cf80050b131e20c7455e401e01dafbcdbca2454a91746a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983908019861713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737983907097902336,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737983906845598640,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737983906513095123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737983906605635954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737983906485896238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd,PodSandboxId:f92ad0817295d0fdbf6fb10a94fdb94c3972151c4cb431ef5bf856facb8ae3a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737983850320922716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc3bbb46-85c2-4451-9bf5-543782b82eb1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.883940384Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a081ee64-abe8-4675-bdd2-d111af7bbae4 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.884034509Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a081ee64-abe8-4675-bdd2-d111af7bbae4 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.885803574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=139182cf-8500-4c5e-be36-504ec2892f71 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.886386413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983942886357701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=139182cf-8500-4c5e-be36-504ec2892f71 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.887205095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7200e048-3ecd-4a7a-acc0-968a713ebfc7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.887280553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7200e048-3ecd-4a7a-acc0-968a713ebfc7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:02 pause-715621 crio[2136]: time="2025-01-27 13:19:02.887862830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983925776076018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983921956790208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983921925002117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983921945070155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983921918246526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be,PodSandboxId:a895a94cb42009796cf80050b131e20c7455e401e01dafbcdbca2454a91746a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983908019861713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737983907097902336,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737983906845598640,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737983906513095123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737983906605635954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737983906485896238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd,PodSandboxId:f92ad0817295d0fdbf6fb10a94fdb94c3972151c4cb431ef5bf856facb8ae3a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737983850320922716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7200e048-3ecd-4a7a-acc0-968a713ebfc7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	86bda3174ef03       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   17 seconds ago       Running             kube-proxy                2                   77ae7a20fd48d       kube-proxy-bmqwx
	4468b121d501a       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   21 seconds ago       Running             kube-apiserver            2                   6691300ff75b6       kube-apiserver-pause-715621
	60e4b02238e77       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   21 seconds ago       Running             kube-scheduler            2                   fa8eceda5d867       kube-scheduler-pause-715621
	4822d278a99ce       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   21 seconds ago       Running             kube-controller-manager   2                   e81ca1ab8afe7       kube-controller-manager-pause-715621
	fc4967cc52bb3       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   21 seconds ago       Running             etcd                      2                   dbe870279382a       etcd-pause-715621
	6434e3d8c8aae       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   34 seconds ago       Running             coredns                   1                   a895a94cb4200       coredns-668d6bf9bc-648ck
	912dc0d6d28ff       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   35 seconds ago       Exited              etcd                      1                   dbe870279382a       etcd-pause-715621
	6a4fbb9b1241f       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   36 seconds ago       Exited              kube-apiserver            1                   6691300ff75b6       kube-apiserver-pause-715621
	42775a1d63625       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   36 seconds ago       Exited              kube-controller-manager   1                   e81ca1ab8afe7       kube-controller-manager-pause-715621
	4ad4a5e410ea5       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   36 seconds ago       Exited              kube-scheduler            1                   fa8eceda5d867       kube-scheduler-pause-715621
	e8162dbee2c8f       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   36 seconds ago       Exited              kube-proxy                1                   77ae7a20fd48d       kube-proxy-bmqwx
	98bde6bc2da8d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   f92ad0817295d       coredns-668d6bf9bc-648ck
	
	
	==> coredns [6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be] <==
	Trace[1449462543]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:18:38.400)
	Trace[1449462543]: [10.000892374s] [10.000892374s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1099040913]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:18:28.399) (total time: 10001ms):
	Trace[1099040913]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:18:38.401)
	Trace[1099040913]: [10.001587277s] [10.001587277s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[515246114]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:18:28.400) (total time: 10001ms):
	Trace[515246114]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:18:38.401)
	Trace[515246114]: [10.001401576s] [10.001401576s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56698 - 46092 "HINFO IN 7416994728805226055.3440515219704200329. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011399899s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[674948108]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:17:30.568) (total time: 30001ms):
	Trace[674948108]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (13:18:00.569)
	Trace[674948108]: [30.001984144s] [30.001984144s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[493040010]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:17:30.568) (total time: 30002ms):
	Trace[493040010]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:18:00.570)
	Trace[493040010]: [30.002500369s] [30.002500369s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1262238197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:17:30.569) (total time: 30002ms):
	Trace[1262238197]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:18:00.571)
	Trace[1262238197]: [30.00299132s] [30.00299132s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-715621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-715621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=pause-715621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_17_25_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:17:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-715621
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:18:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:18:44 +0000   Mon, 27 Jan 2025 13:17:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:18:44 +0000   Mon, 27 Jan 2025 13:17:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:18:44 +0000   Mon, 27 Jan 2025 13:17:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:18:44 +0000   Mon, 27 Jan 2025 13:17:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    pause-715621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 2fb1984d96c4489ca25d9eb482561209
	  System UUID:                2fb1984d-96c4-489c-a25d-9eb482561209
	  Boot ID:                    9a7cd267-4e10-4a9f-82b7-67bcb0a47f02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-648ck                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     94s
	  kube-system                 etcd-pause-715621                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         99s
	  kube-system                 kube-apiserver-pause-715621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-715621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-bmqwx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-pause-715621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node pause-715621 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node pause-715621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node pause-715621 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 99s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  99s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                98s                  kubelet          Node pause-715621 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    98s                  kubelet          Node pause-715621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                  kubelet          Node pause-715621 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  98s                  kubelet          Node pause-715621 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           95s                  node-controller  Node pause-715621 event: Registered Node pause-715621 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-715621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-715621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-715621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                  node-controller  Node pause-715621 event: Registered Node pause-715621 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.964423] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.073554] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061853] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.235911] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.164735] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.354462] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.848865] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +0.060986] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.419301] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +1.638357] kauditd_printk_skb: 92 callbacks suppressed
	[  +4.937207] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	[  +4.428482] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.674705] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 13:18] kauditd_printk_skb: 50 callbacks suppressed
	[ +22.214499] systemd-fstab-generator[2056]: Ignoring "noauto" option for root device
	[  +0.130856] systemd-fstab-generator[2068]: Ignoring "noauto" option for root device
	[  +0.190642] systemd-fstab-generator[2082]: Ignoring "noauto" option for root device
	[  +0.131607] systemd-fstab-generator[2094]: Ignoring "noauto" option for root device
	[  +0.311347] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[  +1.854702] systemd-fstab-generator[2548]: Ignoring "noauto" option for root device
	[  +2.527703] kauditd_printk_skb: 195 callbacks suppressed
	[ +11.741538] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[  +4.595889] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.645363] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	
	
	==> etcd [912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603] <==
	{"level":"info","ts":"2025-01-27T13:18:27.715979Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-01-27T13:18:27.747521Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","commit-index":443}
	{"level":"info","ts":"2025-01-27T13:18:27.747782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 switched to configuration voters=()"}
	{"level":"info","ts":"2025-01-27T13:18:27.747840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became follower at term 2"}
	{"level":"info","ts":"2025-01-27T13:18:27.748102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3b7a74ffda0d9c54 [peers: [], term: 2, commit: 443, applied: 0, lastindex: 443, lastterm: 2]"}
	{"level":"warn","ts":"2025-01-27T13:18:27.754300Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-01-27T13:18:27.799527Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":421}
	{"level":"info","ts":"2025-01-27T13:18:27.816611Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-01-27T13:18:27.826355Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3b7a74ffda0d9c54","timeout":"7s"}
	{"level":"info","ts":"2025-01-27T13:18:27.826726Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3b7a74ffda0d9c54"}
	{"level":"info","ts":"2025-01-27T13:18:27.828258Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"3b7a74ffda0d9c54","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T13:18:27.828741Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T13:18:27.828941Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T13:18:27.829028Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T13:18:27.829043Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T13:18:27.829314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 switched to configuration voters=(4285866637620255828)"}
	{"level":"info","ts":"2025-01-27T13:18:27.829377Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","added-peer-id":"3b7a74ffda0d9c54","added-peer-peer-urls":["https://192.168.39.99:2380"]}
	{"level":"info","ts":"2025-01-27T13:18:27.829492Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:18:27.829554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:18:27.832055Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:18:27.843966Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T13:18:27.845430Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"3b7a74ffda0d9c54","initial-advertise-peer-urls":["https://192.168.39.99:2380"],"listen-peer-urls":["https://192.168.39.99:2380"],"advertise-client-urls":["https://192.168.39.99:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.99:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T13:18:27.847170Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T13:18:27.847317Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2025-01-27T13:18:27.847344Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.99:2380"}
	
	
	==> etcd [fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f] <==
	{"level":"info","ts":"2025-01-27T13:18:42.343575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 switched to configuration voters=(4285866637620255828)"}
	{"level":"info","ts":"2025-01-27T13:18:42.346477Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","added-peer-id":"3b7a74ffda0d9c54","added-peer-peer-urls":["https://192.168.39.99:2380"]}
	{"level":"info","ts":"2025-01-27T13:18:42.346619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:18:42.346780Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:18:42.350821Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T13:18:42.351205Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2025-01-27T13:18:42.351247Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2025-01-27T13:18:42.351354Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"3b7a74ffda0d9c54","initial-advertise-peer-urls":["https://192.168.39.99:2380"],"listen-peer-urls":["https://192.168.39.99:2380"],"advertise-client-urls":["https://192.168.39.99:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.99:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T13:18:42.351787Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T13:18:43.309997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T13:18:43.310061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T13:18:43.310109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 received MsgPreVoteResp from 3b7a74ffda0d9c54 at term 2"}
	{"level":"info","ts":"2025-01-27T13:18:43.310179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T13:18:43.310190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 received MsgVoteResp from 3b7a74ffda0d9c54 at term 3"}
	{"level":"info","ts":"2025-01-27T13:18:43.310204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T13:18:43.310215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3b7a74ffda0d9c54 elected leader 3b7a74ffda0d9c54 at term 3"}
	{"level":"info","ts":"2025-01-27T13:18:43.316696Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"3b7a74ffda0d9c54","local-member-attributes":"{Name:pause-715621 ClientURLs:[https://192.168.39.99:2379]}","request-path":"/0/members/3b7a74ffda0d9c54/attributes","cluster-id":"ec756db12d8761b4","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T13:18:43.316755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:18:43.317316Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:18:43.317910Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:18:43.318700Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T13:18:43.319523Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:18:43.322217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.99:2379"}
	{"level":"info","ts":"2025-01-27T13:18:43.322306Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T13:18:43.322342Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:19:03 up 2 min,  0 users,  load average: 2.12, 0.71, 0.26
	Linux pause-715621 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb] <==
	I0127 13:18:44.751087       1 aggregator.go:171] initial CRD sync complete...
	I0127 13:18:44.751187       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 13:18:44.751215       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 13:18:44.773299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 13:18:44.784940       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 13:18:44.785092       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 13:18:44.786480       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 13:18:44.786562       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 13:18:44.786569       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 13:18:44.788715       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 13:18:44.793536       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 13:18:44.793980       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 13:18:44.809088       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 13:18:44.809110       1 policy_source.go:240] refreshing policies
	I0127 13:18:44.829411       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 13:18:44.863179       1 cache.go:39] Caches are synced for autoregister controller
	I0127 13:18:45.522325       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 13:18:45.592914       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 13:18:46.193318       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 13:18:46.239536       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 13:18:46.267238       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 13:18:46.273551       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 13:18:48.204546       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 13:18:48.264641       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 13:18:53.335198       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae] <==
	W0127 13:18:27.454859       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 13:18:27.455401       1 options.go:238] external host was not specified, using 192.168.39.99
	I0127 13:18:27.465337       1 server.go:143] Version: v1.32.1
	I0127 13:18:27.465424       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:18:28.713327       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0127 13:18:28.716097       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:28.720674       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0127 13:18:28.733453       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 13:18:28.740759       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 13:18:28.740913       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 13:18:28.741118       1 instance.go:233] Using reconciler: lease
	W0127 13:18:28.742298       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:29.719896       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:29.732391       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:29.743070       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:31.047743       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:31.110439       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:31.534687       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:33.230500       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:33.287550       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:34.074342       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:37.466609       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:37.644802       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c] <==
	I0127 13:18:28.619987       1 serving.go:386] Generated self-signed cert in-memory
	I0127 13:18:28.940685       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 13:18:28.940771       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:18:28.942708       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 13:18:28.942846       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 13:18:28.943378       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 13:18:28.943470       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d] <==
	I0127 13:18:47.937573       1 shared_informer.go:320] Caches are synced for HPA
	I0127 13:18:47.937583       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 13:18:47.938104       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 13:18:47.939035       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 13:18:47.939250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 13:18:47.939314       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 13:18:47.939468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.738µs"
	I0127 13:18:47.939794       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 13:18:47.945335       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 13:18:47.952758       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 13:18:47.954201       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 13:18:47.955416       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 13:18:47.957715       1 shared_informer.go:320] Caches are synced for TTL
	I0127 13:18:47.963417       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 13:18:47.963488       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 13:18:47.963496       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 13:18:47.964100       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 13:18:47.966806       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 13:18:47.971703       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 13:18:47.977214       1 shared_informer.go:320] Caches are synced for service account
	I0127 13:18:47.987518       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 13:18:47.990961       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 13:18:53.344398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="40.215239ms"
	I0127 13:18:53.364853       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.292216ms"
	I0127 13:18:53.366892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="103.32µs"
	
	
	==> kube-proxy [86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:18:45.933276       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 13:18:45.945519       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	E0127 13:18:45.945648       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:18:45.978722       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 13:18:45.978763       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 13:18:45.978784       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:18:45.981758       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:18:45.982038       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:18:45.982068       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:18:45.983830       1 config.go:199] "Starting service config controller"
	I0127 13:18:45.983882       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:18:45.983905       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:18:45.983909       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:18:45.984499       1 config.go:329] "Starting node config controller"
	I0127 13:18:45.984529       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:18:46.084069       1 shared_informer.go:320] Caches are synced for service config
	I0127 13:18:46.084108       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 13:18:46.084701       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8] <==
	I0127 13:18:28.144528       1 server_linux.go:66] "Using iptables proxy"
	E0127 13:18:28.547441       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:18:28.698285       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:18:39.517215       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-715621\": dial tcp 192.168.39.99:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.99:45130->192.168.39.99:8443: read: connection reset by peer"
	
	
	==> kube-scheduler [4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350] <==
	I0127 13:18:28.878426       1 serving.go:386] Generated self-signed cert in-memory
	W0127 13:18:39.514872       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.99:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.99:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.99:45160->192.168.39.99:8443: read: connection reset by peer
	W0127 13:18:39.514907       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 13:18:39.514918       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 13:18:39.532896       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 13:18:39.533530       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0127 13:18:39.533604       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0127 13:18:39.537209       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 13:18:39.537295       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 13:18:39.537346       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0127 13:18:39.537939       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 13:18:39.538060       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 13:18:39.538295       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 13:18:39.538389       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 13:18:39.538432       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0127 13:18:39.539477       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	E0127 13:18:39.540645       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491] <==
	I0127 13:18:43.000783       1 serving.go:386] Generated self-signed cert in-memory
	W0127 13:18:44.648086       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 13:18:44.648231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 13:18:44.648317       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 13:18:44.648341       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 13:18:44.724733       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 13:18:44.727228       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:18:44.733849       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 13:18:44.736229       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 13:18:44.737291       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 13:18:44.737397       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 13:18:44.836637       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.621806    3079 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-715621\" not found" node="pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.621912    3079 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-715621\" not found" node="pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.663035    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.825928    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-715621\" already exists" pod="kube-system/kube-apiserver-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.826060    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.839902    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-715621\" already exists" pod="kube-system/kube-controller-manager-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.840026    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.849271    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-715621\" already exists" pod="kube-system/kube-scheduler-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.849374    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.852736    3079 kubelet_node_status.go:125] "Node was previously registered" node="pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.852870    3079 kubelet_node_status.go:79] "Successfully registered node" node="pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.852977    3079 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.854276    3079 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.862193    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-715621\" already exists" pod="kube-system/etcd-pause-715621"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.452689    3079 apiserver.go:52] "Watching apiserver"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.465885    3079 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.512069    3079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39735320-ee7a-421a-8a86-d98f13ed5917-xtables-lock\") pod \"kube-proxy-bmqwx\" (UID: \"39735320-ee7a-421a-8a86-d98f13ed5917\") " pod="kube-system/kube-proxy-bmqwx"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.512297    3079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39735320-ee7a-421a-8a86-d98f13ed5917-lib-modules\") pod \"kube-proxy-bmqwx\" (UID: \"39735320-ee7a-421a-8a86-d98f13ed5917\") " pod="kube-system/kube-proxy-bmqwx"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.626106    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-715621"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: E0127 13:18:45.638864    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-715621\" already exists" pod="kube-system/kube-scheduler-pause-715621"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.761299    3079 scope.go:117] "RemoveContainer" containerID="e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8"
	Jan 27 13:18:51 pause-715621 kubelet[3079]: E0127 13:18:51.591825    3079 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983931591109193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:18:51 pause-715621 kubelet[3079]: E0127 13:18:51.592377    3079 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983931591109193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:19:01 pause-715621 kubelet[3079]: E0127 13:19:01.593700    3079 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983941593460909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:19:01 pause-715621 kubelet[3079]: E0127 13:19:01.593750    3079 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983941593460909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-715621 -n pause-715621
helpers_test.go:261: (dbg) Run:  kubectl --context pause-715621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-715621 -n pause-715621
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-715621 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-715621 logs -n 25: (1.43933285s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo cat              | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo cat              | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo                  | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo find             | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-211629 sudo crio             | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-211629                       | cilium-211629             | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC | 27 Jan 25 13:16 UTC |
	| ssh     | -p NoKubernetes-392035 sudo            | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| start   | -p pause-715621 --memory=2048          | pause-715621              | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC | 27 Jan 25 13:17 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-392035                 | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:16 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p NoKubernetes-392035                 | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-413928              | running-upgrade-413928    | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p cert-expiration-180143              | cert-expiration-180143    | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-392035 sudo            | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-392035                 | NoKubernetes-392035       | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:17 UTC |
	| start   | -p force-systemd-flag-268206           | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:18 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-715621                        | pause-715621              | jenkins | v1.35.0 | 27 Jan 25 13:17 UTC | 27 Jan 25 13:19 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-268206 ssh cat      | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-268206           | force-systemd-flag-268206 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	| start   | -p stopped-upgrade-619602              | minikube                  | jenkins | v1.26.0 | 27 Jan 25 13:18 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-511736           | kubernetes-upgrade-511736 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC | 27 Jan 25 13:18 UTC |
	| start   | -p kubernetes-upgrade-511736           | kubernetes-upgrade-511736 | jenkins | v1.35.0 | 27 Jan 25 13:18 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:18:54
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:18:54.697885  409716 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:18:54.698002  409716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:18:54.698013  409716 out.go:358] Setting ErrFile to fd 2...
	I0127 13:18:54.698020  409716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:18:54.698218  409716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:18:54.698809  409716 out.go:352] Setting JSON to false
	I0127 13:18:54.699739  409716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":21675,"bootTime":1737962260,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:18:54.699855  409716 start.go:139] virtualization: kvm guest
	I0127 13:18:54.701958  409716 out.go:177] * [kubernetes-upgrade-511736] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:18:54.703354  409716 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:18:54.703391  409716 notify.go:220] Checking for updates...
	I0127 13:18:54.705875  409716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:18:54.707220  409716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:18:54.708423  409716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:18:54.709585  409716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:18:54.710654  409716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:18:54.712180  409716 config.go:182] Loaded profile config "kubernetes-upgrade-511736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 13:18:54.712602  409716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:18:54.712680  409716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:18:54.728940  409716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0127 13:18:54.729344  409716 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:18:54.729848  409716 main.go:141] libmachine: Using API Version  1
	I0127 13:18:54.729869  409716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:18:54.730264  409716 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:18:54.730473  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:18:54.730743  409716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:18:54.731033  409716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:18:54.731079  409716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:18:54.745532  409716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43521
	I0127 13:18:54.745961  409716 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:18:54.746445  409716 main.go:141] libmachine: Using API Version  1
	I0127 13:18:54.746469  409716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:18:54.746846  409716 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:18:54.746998  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:18:54.782249  409716 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:18:54.783571  409716 start.go:297] selected driver: kvm2
	I0127 13:18:54.783587  409716 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:18:54.783691  409716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:18:54.784443  409716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:18:54.784548  409716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:18:54.799531  409716 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:18:54.799918  409716 cni.go:84] Creating CNI manager for ""
	I0127 13:18:54.799984  409716 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:18:54.800030  409716 start.go:340] cluster config:
	{Name:kubernetes-upgrade-511736 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-511736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.10 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:18:54.800123  409716 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:18:54.801679  409716 out.go:177] * Starting "kubernetes-upgrade-511736" primary control-plane node in "kubernetes-upgrade-511736" cluster
	I0127 13:18:53.624992  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:18:53.625324  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | unable to find current IP address of domain stopped-upgrade-619602 in network mk-stopped-upgrade-619602
	I0127 13:18:53.625349  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | I0127 13:18:53.625277  409490 retry.go:31] will retry after 3.105187151s: waiting for domain to come up
	I0127 13:18:53.383834  408369 pod_ready.go:103] pod "etcd-pause-715621" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:55.879664  408369 pod_ready.go:103] pod "etcd-pause-715621" in "kube-system" namespace has status "Ready":"False"
	I0127 13:18:57.379576  408369 pod_ready.go:93] pod "etcd-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:57.379602  408369 pod_ready.go:82] duration metric: took 11.005522s for pod "etcd-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:57.379616  408369 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.385715  408369 pod_ready.go:93] pod "kube-apiserver-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.385746  408369 pod_ready.go:82] duration metric: took 1.006120493s for pod "kube-apiserver-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.385760  408369 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.391193  408369 pod_ready.go:93] pod "kube-controller-manager-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.391213  408369 pod_ready.go:82] duration metric: took 5.445222ms for pod "kube-controller-manager-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.391223  408369 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bmqwx" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.395665  408369 pod_ready.go:93] pod "kube-proxy-bmqwx" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.395686  408369 pod_ready.go:82] duration metric: took 4.456879ms for pod "kube-proxy-bmqwx" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.395695  408369 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.401834  408369 pod_ready.go:93] pod "kube-scheduler-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.401851  408369 pod_ready.go:82] duration metric: took 6.150754ms for pod "kube-scheduler-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.401858  408369 pod_ready.go:39] duration metric: took 12.036663963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:18:58.401875  408369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:18:58.415823  408369 ops.go:34] apiserver oom_adj: -16
	I0127 13:18:58.415843  408369 kubeadm.go:597] duration metric: took 30.265865595s to restartPrimaryControlPlane
	I0127 13:18:58.415851  408369 kubeadm.go:394] duration metric: took 30.450339394s to StartCluster
	I0127 13:18:58.415868  408369 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:58.415948  408369 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:18:58.417213  408369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:18:58.417461  408369 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:18:58.417582  408369 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:18:58.417747  408369 config.go:182] Loaded profile config "pause-715621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:18:58.418896  408369 out.go:177] * Verifying Kubernetes components...
	I0127 13:18:58.418896  408369 out.go:177] * Enabled addons: 
	I0127 13:18:54.802765  409716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:18:54.802798  409716 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:18:54.802805  409716 cache.go:56] Caching tarball of preloaded images
	I0127 13:18:54.802882  409716 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:18:54.802893  409716 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:18:54.802976  409716 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kubernetes-upgrade-511736/config.json ...
	I0127 13:18:54.803139  409716 start.go:360] acquireMachinesLock for kubernetes-upgrade-511736: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:18:56.731611  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:18:56.732195  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | unable to find current IP address of domain stopped-upgrade-619602 in network mk-stopped-upgrade-619602
	I0127 13:18:56.732219  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | I0127 13:18:56.732161  409490 retry.go:31] will retry after 3.834401708s: waiting for domain to come up
	I0127 13:19:00.567674  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.568073  409466 main.go:134] libmachine: (stopped-upgrade-619602) found domain IP: 192.168.83.52
	I0127 13:19:00.568088  409466 main.go:134] libmachine: (stopped-upgrade-619602) reserving static IP address...
	I0127 13:19:00.568104  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has current primary IP address 192.168.83.52 and MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.568518  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | unable to find host DHCP lease matching {name: "stopped-upgrade-619602", mac: "52:54:00:88:84:48", ip: "192.168.83.52"} in network mk-stopped-upgrade-619602
	I0127 13:19:00.644275  409466 main.go:134] libmachine: (stopped-upgrade-619602) reserved static IP address 192.168.83.52 for domain stopped-upgrade-619602
	I0127 13:19:00.644297  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | Getting to WaitForSSH function...
	I0127 13:19:00.644307  409466 main.go:134] libmachine: (stopped-upgrade-619602) waiting for SSH...
	I0127 13:19:00.647436  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.647870  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:84:48", ip: ""} in network mk-stopped-upgrade-619602: {Iface:virbr3 ExpiryTime:2025-01-27 14:18:55 +0000 UTC Type:0 Mac:52:54:00:88:84:48 Iaid: IPaddr:192.168.83.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:88:84:48}
	I0127 13:19:00.647898  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined IP address 192.168.83.52 and MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.648037  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | Using SSH client type: external
	I0127 13:19:00.648060  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/stopped-upgrade-619602/id_rsa (-rw-------)
	I0127 13:19:00.648089  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/stopped-upgrade-619602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:19:00.648097  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | About to run SSH command:
	I0127 13:19:00.648109  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | exit 0
	I0127 13:19:00.741818  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | SSH cmd err, output: <nil>: 
	I0127 13:19:00.742060  409466 main.go:134] libmachine: (stopped-upgrade-619602) KVM machine creation complete
	I0127 13:19:00.742460  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetConfigRaw
	I0127 13:19:00.743020  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .DriverName
	I0127 13:19:00.743235  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .DriverName
	I0127 13:19:00.743370  409466 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 13:19:00.743382  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetState
	I0127 13:19:00.744638  409466 main.go:134] libmachine: Detecting operating system of created instance...
	I0127 13:19:00.744647  409466 main.go:134] libmachine: Waiting for SSH to be available...
	I0127 13:19:00.744653  409466 main.go:134] libmachine: Getting to WaitForSSH function...
	I0127 13:19:00.744663  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHHostname
	I0127 13:19:00.746907  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.747206  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:84:48", ip: ""} in network mk-stopped-upgrade-619602: {Iface:virbr3 ExpiryTime:2025-01-27 14:18:55 +0000 UTC Type:0 Mac:52:54:00:88:84:48 Iaid: IPaddr:192.168.83.52 Prefix:24 Hostname:stopped-upgrade-619602 Clientid:01:52:54:00:88:84:48}
	I0127 13:19:00.747229  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined IP address 192.168.83.52 and MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.747419  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHPort
	I0127 13:19:00.747585  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHKeyPath
	I0127 13:19:00.747715  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHKeyPath
	I0127 13:19:00.747810  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHUsername
	I0127 13:19:00.747944  409466 main.go:134] libmachine: Using SSH client type: native
	I0127 13:19:00.748118  409466 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 192.168.83.52 22 <nil> <nil>}
	I0127 13:19:00.748128  409466 main.go:134] libmachine: About to run SSH command:
	exit 0
	I0127 13:19:00.873255  409466 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:19:00.873267  409466 main.go:134] libmachine: Detecting the provisioner...
	I0127 13:19:00.873274  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHHostname
	I0127 13:19:00.875992  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.876387  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:84:48", ip: ""} in network mk-stopped-upgrade-619602: {Iface:virbr3 ExpiryTime:2025-01-27 14:18:55 +0000 UTC Type:0 Mac:52:54:00:88:84:48 Iaid: IPaddr:192.168.83.52 Prefix:24 Hostname:stopped-upgrade-619602 Clientid:01:52:54:00:88:84:48}
	I0127 13:19:00.876408  409466 main.go:134] libmachine: (stopped-upgrade-619602) DBG | domain stopped-upgrade-619602 has defined IP address 192.168.83.52 and MAC address 52:54:00:88:84:48 in network mk-stopped-upgrade-619602
	I0127 13:19:00.876533  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHPort
	I0127 13:19:00.876712  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHKeyPath
	I0127 13:19:00.876888  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHKeyPath
	I0127 13:19:00.877033  409466 main.go:134] libmachine: (stopped-upgrade-619602) Calling .GetSSHUsername
	I0127 13:19:00.877188  409466 main.go:134] libmachine: Using SSH client type: native
	I0127 13:19:00.877309  409466 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7dae00] 0x7dde60 <nil>  [] 0s} 192.168.83.52 22 <nil> <nil>}
	I0127 13:19:00.877315  409466 main.go:134] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 13:18:58.420092  408369 addons.go:514] duration metric: took 2.516323ms for enable addons: enabled=[]
	I0127 13:18:58.420125  408369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:18:58.563821  408369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:18:58.581421  408369 node_ready.go:35] waiting up to 6m0s for node "pause-715621" to be "Ready" ...
	I0127 13:18:58.584442  408369 node_ready.go:49] node "pause-715621" has status "Ready":"True"
	I0127 13:18:58.584464  408369 node_ready.go:38] duration metric: took 3.003779ms for node "pause-715621" to be "Ready" ...
	I0127 13:18:58.584477  408369 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:18:58.590435  408369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-648ck" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.977989  408369 pod_ready.go:93] pod "coredns-668d6bf9bc-648ck" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:58.978018  408369 pod_ready.go:82] duration metric: took 387.562268ms for pod "coredns-668d6bf9bc-648ck" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:58.978030  408369 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:59.378401  408369 pod_ready.go:93] pod "etcd-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:59.378431  408369 pod_ready.go:82] duration metric: took 400.394354ms for pod "etcd-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:59.378443  408369 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:59.778435  408369 pod_ready.go:93] pod "kube-apiserver-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:18:59.778458  408369 pod_ready.go:82] duration metric: took 400.006887ms for pod "kube-apiserver-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:18:59.778468  408369 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.178105  408369 pod_ready.go:93] pod "kube-controller-manager-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:19:00.178137  408369 pod_ready.go:82] duration metric: took 399.660963ms for pod "kube-controller-manager-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.178148  408369 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bmqwx" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.578377  408369 pod_ready.go:93] pod "kube-proxy-bmqwx" in "kube-system" namespace has status "Ready":"True"
	I0127 13:19:00.578403  408369 pod_ready.go:82] duration metric: took 400.247896ms for pod "kube-proxy-bmqwx" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.578416  408369 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.978461  408369 pod_ready.go:93] pod "kube-scheduler-pause-715621" in "kube-system" namespace has status "Ready":"True"
	I0127 13:19:00.978492  408369 pod_ready.go:82] duration metric: took 400.06632ms for pod "kube-scheduler-pause-715621" in "kube-system" namespace to be "Ready" ...
	I0127 13:19:00.978505  408369 pod_ready.go:39] duration metric: took 2.394015491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:19:00.978525  408369 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:19:00.978614  408369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:19:00.996449  408369 api_server.go:72] duration metric: took 2.578949849s to wait for apiserver process to appear ...
	I0127 13:19:00.996479  408369 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:19:00.996503  408369 api_server.go:253] Checking apiserver healthz at https://192.168.39.99:8443/healthz ...
	I0127 13:19:01.002660  408369 api_server.go:279] https://192.168.39.99:8443/healthz returned 200:
	ok
	I0127 13:19:01.003911  408369 api_server.go:141] control plane version: v1.32.1
	I0127 13:19:01.003939  408369 api_server.go:131] duration metric: took 7.450455ms to wait for apiserver health ...
	I0127 13:19:01.003949  408369 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:19:01.182903  408369 system_pods.go:59] 6 kube-system pods found
	I0127 13:19:01.182931  408369 system_pods.go:61] "coredns-668d6bf9bc-648ck" [565bb444-713e-4588-b9c5-7fca2461c011] Running
	I0127 13:19:01.182935  408369 system_pods.go:61] "etcd-pause-715621" [40996295-2939-4b9c-a962-3b72e1005238] Running
	I0127 13:19:01.182945  408369 system_pods.go:61] "kube-apiserver-pause-715621" [cd262431-be72-4e4c-9734-6f5df33b8801] Running
	I0127 13:19:01.182949  408369 system_pods.go:61] "kube-controller-manager-pause-715621" [e43a90ef-cbfa-4da4-af4f-90c73116f0bd] Running
	I0127 13:19:01.182955  408369 system_pods.go:61] "kube-proxy-bmqwx" [39735320-ee7a-421a-8a86-d98f13ed5917] Running
	I0127 13:19:01.182958  408369 system_pods.go:61] "kube-scheduler-pause-715621" [89abe876-3e9e-495b-9315-95f7b7660b49] Running
	I0127 13:19:01.182963  408369 system_pods.go:74] duration metric: took 179.008588ms to wait for pod list to return data ...
	I0127 13:19:01.182970  408369 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:19:01.378592  408369 default_sa.go:45] found service account: "default"
	I0127 13:19:01.378619  408369 default_sa.go:55] duration metric: took 195.64227ms for default service account to be created ...
	I0127 13:19:01.378629  408369 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:19:01.580474  408369 system_pods.go:87] 6 kube-system pods found
	I0127 13:19:01.778656  408369 system_pods.go:105] "coredns-668d6bf9bc-648ck" [565bb444-713e-4588-b9c5-7fca2461c011] Running
	I0127 13:19:01.778678  408369 system_pods.go:105] "etcd-pause-715621" [40996295-2939-4b9c-a962-3b72e1005238] Running
	I0127 13:19:01.778683  408369 system_pods.go:105] "kube-apiserver-pause-715621" [cd262431-be72-4e4c-9734-6f5df33b8801] Running
	I0127 13:19:01.778688  408369 system_pods.go:105] "kube-controller-manager-pause-715621" [e43a90ef-cbfa-4da4-af4f-90c73116f0bd] Running
	I0127 13:19:01.778694  408369 system_pods.go:105] "kube-proxy-bmqwx" [39735320-ee7a-421a-8a86-d98f13ed5917] Running
	I0127 13:19:01.778699  408369 system_pods.go:105] "kube-scheduler-pause-715621" [89abe876-3e9e-495b-9315-95f7b7660b49] Running
	I0127 13:19:01.778708  408369 system_pods.go:147] duration metric: took 400.071629ms to wait for k8s-apps to be running ...
	I0127 13:19:01.778718  408369 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 13:19:01.778773  408369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:19:01.795632  408369 system_svc.go:56] duration metric: took 16.900901ms WaitForService to wait for kubelet
	I0127 13:19:01.795664  408369 kubeadm.go:582] duration metric: took 3.378170986s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:19:01.795696  408369 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:19:01.978145  408369 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:19:01.978179  408369 node_conditions.go:123] node cpu capacity is 2
	I0127 13:19:01.978194  408369 node_conditions.go:105] duration metric: took 182.492916ms to run NodePressure ...
	I0127 13:19:01.978211  408369 start.go:241] waiting for startup goroutines ...
	I0127 13:19:01.978220  408369 start.go:246] waiting for cluster config update ...
	I0127 13:19:01.978230  408369 start.go:255] writing updated cluster config ...
	I0127 13:19:01.978626  408369 ssh_runner.go:195] Run: rm -f paused
	I0127 13:19:02.029743  408369 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:19:02.032770  408369 out.go:177] * Done! kubectl is now configured to use "pause-715621" cluster and "default" namespace by default
	I0127 13:19:02.231025  409716 start.go:364] duration metric: took 7.427839593s to acquireMachinesLock for "kubernetes-upgrade-511736"
	I0127 13:19:02.231071  409716 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:19:02.231077  409716 fix.go:54] fixHost starting: 
	I0127 13:19:02.231424  409716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:19:02.231470  409716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:19:02.249158  409716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36741
	I0127 13:19:02.249555  409716 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:19:02.250116  409716 main.go:141] libmachine: Using API Version  1
	I0127 13:19:02.250143  409716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:19:02.250527  409716 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:19:02.250788  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	I0127 13:19:02.250940  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .GetState
	I0127 13:19:02.252481  409716 fix.go:112] recreateIfNeeded on kubernetes-upgrade-511736: state=Stopped err=<nil>
	I0127 13:19:02.252520  409716 main.go:141] libmachine: (kubernetes-upgrade-511736) Calling .DriverName
	W0127 13:19:02.252771  409716 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:19:02.254991  409716 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-511736" ...
	
	
	==> CRI-O <==
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.794900092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983944794849128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9844c8cd-44ea-4567-b39b-a81cd9eab8a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.795489315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5193cae-c2e6-466a-8d80-60e8ba7e70fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.795544184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5193cae-c2e6-466a-8d80-60e8ba7e70fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.795819292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983925776076018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983921956790208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983921925002117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983921945070155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983921918246526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be,PodSandboxId:a895a94cb42009796cf80050b131e20c7455e401e01dafbcdbca2454a91746a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983908019861713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737983907097902336,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737983906845598640,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737983906513095123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737983906605635954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737983906485896238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd,PodSandboxId:f92ad0817295d0fdbf6fb10a94fdb94c3972151c4cb431ef5bf856facb8ae3a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737983850320922716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5193cae-c2e6-466a-8d80-60e8ba7e70fc name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.843045932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f614741a-0033-4202-b42f-f5130cfa5bc0 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.843115853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f614741a-0033-4202-b42f-f5130cfa5bc0 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.844288367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e655d79c-752e-4c1d-bcc7-aef1b5290664 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.844640413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983944844618824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e655d79c-752e-4c1d-bcc7-aef1b5290664 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.845485404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a0dce27-6ebd-4b59-96d7-461bce241fda name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.845535453Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a0dce27-6ebd-4b59-96d7-461bce241fda name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.845787880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983925776076018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983921956790208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983921925002117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983921945070155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983921918246526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be,PodSandboxId:a895a94cb42009796cf80050b131e20c7455e401e01dafbcdbca2454a91746a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983908019861713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737983907097902336,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737983906845598640,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737983906513095123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737983906605635954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737983906485896238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd,PodSandboxId:f92ad0817295d0fdbf6fb10a94fdb94c3972151c4cb431ef5bf856facb8ae3a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737983850320922716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a0dce27-6ebd-4b59-96d7-461bce241fda name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.893754902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4bbd70bc-6d59-46a2-8f4f-a795edbe8a7a name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.893843810Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4bbd70bc-6d59-46a2-8f4f-a795edbe8a7a name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.895466548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b55f1965-5a4e-4967-a065-7484044362b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.895870327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983944895846044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b55f1965-5a4e-4967-a065-7484044362b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.896371768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d611afa8-a567-4d64-a1e2-aa2f5de645e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.896426651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d611afa8-a567-4d64-a1e2-aa2f5de645e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.896669321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983925776076018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983921956790208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983921925002117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983921945070155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983921918246526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be,PodSandboxId:a895a94cb42009796cf80050b131e20c7455e401e01dafbcdbca2454a91746a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983908019861713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737983907097902336,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737983906845598640,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737983906513095123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737983906605635954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737983906485896238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd,PodSandboxId:f92ad0817295d0fdbf6fb10a94fdb94c3972151c4cb431ef5bf856facb8ae3a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737983850320922716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d611afa8-a567-4d64-a1e2-aa2f5de645e1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.953033273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fd23065-f34e-428b-bda5-d097b6e6e9c3 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.953176514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fd23065-f34e-428b-bda5-d097b6e6e9c3 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.954368938Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acdbf7ec-dc6f-4325-a34d-9da171e7d30e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.954767189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983944954744221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acdbf7ec-dc6f-4325-a34d-9da171e7d30e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.955583293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ff92840-e2a0-440e-b481-c8c692f1366a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.955655563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ff92840-e2a0-440e-b481-c8c692f1366a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:19:04 pause-715621 crio[2136]: time="2025-01-27 13:19:04.955949307Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737983925776076018,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737983921956790208,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737983921925002117,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737983921945070155,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737983921918246526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be,PodSandboxId:a895a94cb42009796cf80050b131e20c7455e401e01dafbcdbca2454a91746a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737983908019861713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603,PodSandboxId:dbe870279382aeb3a8ba0275fa591d6ac57f57ce3465b39157a4ae6489ab428d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1737983907097902336,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e7d9265efa3389bd147d291ff8f118,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes
.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae,PodSandboxId:6691300ff75b66a12267cb4800f2e7af280456684000afb22936bb29bda3c16c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737983906845598640,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff59bea8f7fc9bf7aed60bff0004abcf,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartC
ount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350,PodSandboxId:fa8eceda5d86719404d6b5845941e3f12d8a0d23085417ce740abda66ac1bcd4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1737983906513095123,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e11cca2cf30cbd9d46a86e5a140a051,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c,PodSandboxId:e81ca1ab8afe773ad3b9d82608d773b1842a418eeba0cf33e545a4f8508480fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1737983906605635954,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-715621,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8349e83828d9b3ec60791de6bb9b4a9a,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8,PodSandboxId:77ae7a20fd48d0817bfdd4334733a64150448ae743c3771d670ce4abbbe19a54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1737983906485896238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmqwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39735320-ee7a-421a-8a86-d98f13ed5917,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd,PodSandboxId:f92ad0817295d0fdbf6fb10a94fdb94c3972151c4cb431ef5bf856facb8ae3a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1737983850320922716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-648ck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 565bb444-713e-4588-b9c5-7fca2461c011,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ff92840-e2a0-440e-b481-c8c692f1366a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	86bda3174ef03       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   19 seconds ago       Running             kube-proxy                2                   77ae7a20fd48d       kube-proxy-bmqwx
	4468b121d501a       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   23 seconds ago       Running             kube-apiserver            2                   6691300ff75b6       kube-apiserver-pause-715621
	60e4b02238e77       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   23 seconds ago       Running             kube-scheduler            2                   fa8eceda5d867       kube-scheduler-pause-715621
	4822d278a99ce       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   23 seconds ago       Running             kube-controller-manager   2                   e81ca1ab8afe7       kube-controller-manager-pause-715621
	fc4967cc52bb3       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   23 seconds ago       Running             etcd                      2                   dbe870279382a       etcd-pause-715621
	6434e3d8c8aae       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   37 seconds ago       Running             coredns                   1                   a895a94cb4200       coredns-668d6bf9bc-648ck
	912dc0d6d28ff       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   37 seconds ago       Exited              etcd                      1                   dbe870279382a       etcd-pause-715621
	6a4fbb9b1241f       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   38 seconds ago       Exited              kube-apiserver            1                   6691300ff75b6       kube-apiserver-pause-715621
	42775a1d63625       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   38 seconds ago       Exited              kube-controller-manager   1                   e81ca1ab8afe7       kube-controller-manager-pause-715621
	4ad4a5e410ea5       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   38 seconds ago       Exited              kube-scheduler            1                   fa8eceda5d867       kube-scheduler-pause-715621
	e8162dbee2c8f       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   38 seconds ago       Exited              kube-proxy                1                   77ae7a20fd48d       kube-proxy-bmqwx
	98bde6bc2da8d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   f92ad0817295d       coredns-668d6bf9bc-648ck
	
	
	==> coredns [6434e3d8c8aae08d6e911e67fe8ec04a658cea3686a5e5c94e9908dea875b5be] <==
	Trace[1449462543]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (13:18:38.400)
	Trace[1449462543]: [10.000892374s] [10.000892374s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1099040913]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:18:28.399) (total time: 10001ms):
	Trace[1099040913]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:18:38.401)
	Trace[1099040913]: [10.001587277s] [10.001587277s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[515246114]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:18:28.400) (total time: 10001ms):
	Trace[515246114]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:18:38.401)
	Trace[515246114]: [10.001401576s] [10.001401576s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [98bde6bc2da8dd04728dd9801f6c54fe5632014a8f1e563f8375c89df07d25fd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56698 - 46092 "HINFO IN 7416994728805226055.3440515219704200329. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011399899s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[674948108]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:17:30.568) (total time: 30001ms):
	Trace[674948108]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (13:18:00.569)
	Trace[674948108]: [30.001984144s] [30.001984144s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[493040010]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:17:30.568) (total time: 30002ms):
	Trace[493040010]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:18:00.570)
	Trace[493040010]: [30.002500369s] [30.002500369s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1262238197]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Jan-2025 13:17:30.569) (total time: 30002ms):
	Trace[1262238197]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:18:00.571)
	Trace[1262238197]: [30.00299132s] [30.00299132s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-715621
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-715621
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=pause-715621
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_17_25_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:17:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-715621
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:18:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:18:44 +0000   Mon, 27 Jan 2025 13:17:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:18:44 +0000   Mon, 27 Jan 2025 13:17:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:18:44 +0000   Mon, 27 Jan 2025 13:17:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:18:44 +0000   Mon, 27 Jan 2025 13:17:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.99
	  Hostname:    pause-715621
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 2fb1984d96c4489ca25d9eb482561209
	  System UUID:                2fb1984d-96c4-489c-a25d-9eb482561209
	  Boot ID:                    9a7cd267-4e10-4a9f-82b7-67bcb0a47f02
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-648ck                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     96s
	  kube-system                 etcd-pause-715621                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-715621             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-pause-715621    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-bmqwx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-pause-715621             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  Starting                 19s                  kube-proxy       
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)  kubelet          Node pause-715621 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node pause-715621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node pause-715621 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                100s                 kubelet          Node pause-715621 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    100s                 kubelet          Node pause-715621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s                 kubelet          Node pause-715621 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  100s                 kubelet          Node pause-715621 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           97s                  node-controller  Node pause-715621 event: Registered Node pause-715621 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-715621 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-715621 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-715621 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                  node-controller  Node pause-715621 event: Registered Node pause-715621 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.964423] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.073554] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061853] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.235911] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.164735] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.354462] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.848865] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +0.060986] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.419301] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +1.638357] kauditd_printk_skb: 92 callbacks suppressed
	[  +4.937207] systemd-fstab-generator[1235]: Ignoring "noauto" option for root device
	[  +4.428482] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.674705] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 13:18] kauditd_printk_skb: 50 callbacks suppressed
	[ +22.214499] systemd-fstab-generator[2056]: Ignoring "noauto" option for root device
	[  +0.130856] systemd-fstab-generator[2068]: Ignoring "noauto" option for root device
	[  +0.190642] systemd-fstab-generator[2082]: Ignoring "noauto" option for root device
	[  +0.131607] systemd-fstab-generator[2094]: Ignoring "noauto" option for root device
	[  +0.311347] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[  +1.854702] systemd-fstab-generator[2548]: Ignoring "noauto" option for root device
	[  +2.527703] kauditd_printk_skb: 195 callbacks suppressed
	[ +11.741538] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[  +4.595889] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.645363] systemd-fstab-generator[3510]: Ignoring "noauto" option for root device
	
	
	==> etcd [912dc0d6d28ffc712720764a39897ff1fabc48242c93dd619fb52bccf3250603] <==
	{"level":"info","ts":"2025-01-27T13:18:27.715979Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-01-27T13:18:27.747521Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","commit-index":443}
	{"level":"info","ts":"2025-01-27T13:18:27.747782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 switched to configuration voters=()"}
	{"level":"info","ts":"2025-01-27T13:18:27.747840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became follower at term 2"}
	{"level":"info","ts":"2025-01-27T13:18:27.748102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3b7a74ffda0d9c54 [peers: [], term: 2, commit: 443, applied: 0, lastindex: 443, lastterm: 2]"}
	{"level":"warn","ts":"2025-01-27T13:18:27.754300Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-01-27T13:18:27.799527Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":421}
	{"level":"info","ts":"2025-01-27T13:18:27.816611Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-01-27T13:18:27.826355Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3b7a74ffda0d9c54","timeout":"7s"}
	{"level":"info","ts":"2025-01-27T13:18:27.826726Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3b7a74ffda0d9c54"}
	{"level":"info","ts":"2025-01-27T13:18:27.828258Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"3b7a74ffda0d9c54","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-27T13:18:27.828741Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-27T13:18:27.828941Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T13:18:27.829028Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T13:18:27.829043Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T13:18:27.829314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 switched to configuration voters=(4285866637620255828)"}
	{"level":"info","ts":"2025-01-27T13:18:27.829377Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","added-peer-id":"3b7a74ffda0d9c54","added-peer-peer-urls":["https://192.168.39.99:2380"]}
	{"level":"info","ts":"2025-01-27T13:18:27.829492Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:18:27.829554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:18:27.832055Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:18:27.843966Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T13:18:27.845430Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"3b7a74ffda0d9c54","initial-advertise-peer-urls":["https://192.168.39.99:2380"],"listen-peer-urls":["https://192.168.39.99:2380"],"advertise-client-urls":["https://192.168.39.99:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.99:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T13:18:27.847170Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T13:18:27.847317Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2025-01-27T13:18:27.847344Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.99:2380"}
	
	
	==> etcd [fc4967cc52bb33e5e235309a487bc02af07bfb549a12afe7c9962c9bccf4fe6f] <==
	{"level":"info","ts":"2025-01-27T13:18:42.343575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 switched to configuration voters=(4285866637620255828)"}
	{"level":"info","ts":"2025-01-27T13:18:42.346477Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","added-peer-id":"3b7a74ffda0d9c54","added-peer-peer-urls":["https://192.168.39.99:2380"]}
	{"level":"info","ts":"2025-01-27T13:18:42.346619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ec756db12d8761b4","local-member-id":"3b7a74ffda0d9c54","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:18:42.346780Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T13:18:42.350821Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T13:18:42.351205Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2025-01-27T13:18:42.351247Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.99:2380"}
	{"level":"info","ts":"2025-01-27T13:18:42.351354Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"3b7a74ffda0d9c54","initial-advertise-peer-urls":["https://192.168.39.99:2380"],"listen-peer-urls":["https://192.168.39.99:2380"],"advertise-client-urls":["https://192.168.39.99:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.99:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T13:18:42.351787Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T13:18:43.309997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T13:18:43.310061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T13:18:43.310109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 received MsgPreVoteResp from 3b7a74ffda0d9c54 at term 2"}
	{"level":"info","ts":"2025-01-27T13:18:43.310179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T13:18:43.310190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 received MsgVoteResp from 3b7a74ffda0d9c54 at term 3"}
	{"level":"info","ts":"2025-01-27T13:18:43.310204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3b7a74ffda0d9c54 became leader at term 3"}
	{"level":"info","ts":"2025-01-27T13:18:43.310215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3b7a74ffda0d9c54 elected leader 3b7a74ffda0d9c54 at term 3"}
	{"level":"info","ts":"2025-01-27T13:18:43.316696Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"3b7a74ffda0d9c54","local-member-attributes":"{Name:pause-715621 ClientURLs:[https://192.168.39.99:2379]}","request-path":"/0/members/3b7a74ffda0d9c54/attributes","cluster-id":"ec756db12d8761b4","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T13:18:43.316755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:18:43.317316Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T13:18:43.317910Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:18:43.318700Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T13:18:43.319523Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T13:18:43.322217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.99:2379"}
	{"level":"info","ts":"2025-01-27T13:18:43.322306Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T13:18:43.322342Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:19:05 up 2 min,  0 users,  load average: 2.35, 0.78, 0.28
	Linux pause-715621 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4468b121d501a07870c3e9e37a5bba4a9c8675311688f390f1d9b00a710ffbdb] <==
	I0127 13:18:44.751087       1 aggregator.go:171] initial CRD sync complete...
	I0127 13:18:44.751187       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 13:18:44.751215       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 13:18:44.773299       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 13:18:44.784940       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 13:18:44.785092       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 13:18:44.786480       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 13:18:44.786562       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 13:18:44.786569       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 13:18:44.788715       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 13:18:44.793536       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 13:18:44.793980       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 13:18:44.809088       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 13:18:44.809110       1 policy_source.go:240] refreshing policies
	I0127 13:18:44.829411       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 13:18:44.863179       1 cache.go:39] Caches are synced for autoregister controller
	I0127 13:18:45.522325       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 13:18:45.592914       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 13:18:46.193318       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 13:18:46.239536       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 13:18:46.267238       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 13:18:46.273551       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 13:18:48.204546       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 13:18:48.264641       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 13:18:53.335198       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [6a4fbb9b1241f8f117fbcd4b910b78781296c58ad30f2e2694f896a3de1ce3ae] <==
	W0127 13:18:27.454859       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 13:18:27.455401       1 options.go:238] external host was not specified, using 192.168.39.99
	I0127 13:18:27.465337       1 server.go:143] Version: v1.32.1
	I0127 13:18:27.465424       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:18:28.713327       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0127 13:18:28.716097       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:28.720674       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0127 13:18:28.733453       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 13:18:28.740759       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 13:18:28.740913       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 13:18:28.741118       1 instance.go:233] Using reconciler: lease
	W0127 13:18:28.742298       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:29.719896       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:29.732391       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:29.743070       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:31.047743       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:31.110439       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:31.534687       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:33.230500       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:33.287550       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:34.074342       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:37.466609       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:18:37.644802       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [42775a1d636252a8b74c013727bc2a6203d5e2a0c073128e2a84413432e6241c] <==
	I0127 13:18:28.619987       1 serving.go:386] Generated self-signed cert in-memory
	I0127 13:18:28.940685       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 13:18:28.940771       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:18:28.942708       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 13:18:28.942846       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 13:18:28.943378       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 13:18:28.943470       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [4822d278a99cec00105497491ff859f190499d585c701c6a778624184da7ab2d] <==
	I0127 13:18:47.937573       1 shared_informer.go:320] Caches are synced for HPA
	I0127 13:18:47.937583       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 13:18:47.938104       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 13:18:47.939035       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 13:18:47.939250       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 13:18:47.939314       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 13:18:47.939468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="86.738µs"
	I0127 13:18:47.939794       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 13:18:47.945335       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 13:18:47.952758       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 13:18:47.954201       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 13:18:47.955416       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 13:18:47.957715       1 shared_informer.go:320] Caches are synced for TTL
	I0127 13:18:47.963417       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 13:18:47.963488       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 13:18:47.963496       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 13:18:47.964100       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 13:18:47.966806       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 13:18:47.971703       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 13:18:47.977214       1 shared_informer.go:320] Caches are synced for service account
	I0127 13:18:47.987518       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 13:18:47.990961       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 13:18:53.344398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="40.215239ms"
	I0127 13:18:53.364853       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.292216ms"
	I0127 13:18:53.366892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="103.32µs"
	
	
	==> kube-proxy [86bda3174ef036023470f7fa1477a15abba05ebac9a7da0ebbe97055bc4c2434] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:18:45.933276       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 13:18:45.945519       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.99"]
	E0127 13:18:45.945648       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:18:45.978722       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 13:18:45.978763       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 13:18:45.978784       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:18:45.981758       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:18:45.982038       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:18:45.982068       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:18:45.983830       1 config.go:199] "Starting service config controller"
	I0127 13:18:45.983882       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:18:45.983905       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:18:45.983909       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:18:45.984499       1 config.go:329] "Starting node config controller"
	I0127 13:18:45.984529       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:18:46.084069       1 shared_informer.go:320] Caches are synced for service config
	I0127 13:18:46.084108       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 13:18:46.084701       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8] <==
	I0127 13:18:28.144528       1 server_linux.go:66] "Using iptables proxy"
	E0127 13:18:28.547441       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:18:28.698285       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:18:39.517215       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-715621\": dial tcp 192.168.39.99:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.99:45130->192.168.39.99:8443: read: connection reset by peer"
	
	
	==> kube-scheduler [4ad4a5e410ea58f89a48d94ab597dac8b3335c5ce334398970ec597c1b07b350] <==
	I0127 13:18:28.878426       1 serving.go:386] Generated self-signed cert in-memory
	W0127 13:18:39.514872       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.99:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.99:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.99:45160->192.168.39.99:8443: read: connection reset by peer
	W0127 13:18:39.514907       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 13:18:39.514918       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 13:18:39.532896       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 13:18:39.533530       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0127 13:18:39.533604       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0127 13:18:39.537209       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 13:18:39.537295       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 13:18:39.537346       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0127 13:18:39.537939       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 13:18:39.538060       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 13:18:39.538295       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 13:18:39.538389       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 13:18:39.538432       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0127 13:18:39.539477       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	E0127 13:18:39.540645       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [60e4b02238e772a4ca45ca7100897361dd29ace0f2f61dcb09fed2c8313e3491] <==
	I0127 13:18:43.000783       1 serving.go:386] Generated self-signed cert in-memory
	W0127 13:18:44.648086       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 13:18:44.648231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 13:18:44.648317       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 13:18:44.648341       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 13:18:44.724733       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 13:18:44.727228       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:18:44.733849       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 13:18:44.736229       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 13:18:44.737291       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 13:18:44.737397       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 13:18:44.836637       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.621806    3079 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-715621\" not found" node="pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.621912    3079 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-715621\" not found" node="pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.663035    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.825928    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-715621\" already exists" pod="kube-system/kube-apiserver-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.826060    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.839902    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-715621\" already exists" pod="kube-system/kube-controller-manager-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.840026    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.849271    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-715621\" already exists" pod="kube-system/kube-scheduler-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.849374    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.852736    3079 kubelet_node_status.go:125] "Node was previously registered" node="pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.852870    3079 kubelet_node_status.go:79] "Successfully registered node" node="pause-715621"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.852977    3079 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: I0127 13:18:44.854276    3079 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 27 13:18:44 pause-715621 kubelet[3079]: E0127 13:18:44.862193    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-715621\" already exists" pod="kube-system/etcd-pause-715621"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.452689    3079 apiserver.go:52] "Watching apiserver"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.465885    3079 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.512069    3079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39735320-ee7a-421a-8a86-d98f13ed5917-xtables-lock\") pod \"kube-proxy-bmqwx\" (UID: \"39735320-ee7a-421a-8a86-d98f13ed5917\") " pod="kube-system/kube-proxy-bmqwx"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.512297    3079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39735320-ee7a-421a-8a86-d98f13ed5917-lib-modules\") pod \"kube-proxy-bmqwx\" (UID: \"39735320-ee7a-421a-8a86-d98f13ed5917\") " pod="kube-system/kube-proxy-bmqwx"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.626106    3079 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-715621"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: E0127 13:18:45.638864    3079 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-715621\" already exists" pod="kube-system/kube-scheduler-pause-715621"
	Jan 27 13:18:45 pause-715621 kubelet[3079]: I0127 13:18:45.761299    3079 scope.go:117] "RemoveContainer" containerID="e8162dbee2c8f6a9c621da00459489b3af1b732696bf5ac65a45664c158e84f8"
	Jan 27 13:18:51 pause-715621 kubelet[3079]: E0127 13:18:51.591825    3079 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983931591109193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:18:51 pause-715621 kubelet[3079]: E0127 13:18:51.592377    3079 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983931591109193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:19:01 pause-715621 kubelet[3079]: E0127 13:19:01.593700    3079 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983941593460909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:19:01 pause-715621 kubelet[3079]: E0127 13:19:01.593750    3079 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737983941593460909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-715621 -n pause-715621
helpers_test.go:261: (dbg) Run:  kubectl --context pause-715621 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (88.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (274.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-838260 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-838260 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.65332598s)

                                                
                                                
-- stdout --
	* [old-k8s-version-838260] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-838260" primary control-plane node in "old-k8s-version-838260" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:24:02.618227  420607 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:24:02.618454  420607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:02.618492  420607 out.go:358] Setting ErrFile to fd 2...
	I0127 13:24:02.618514  420607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:24:02.618843  420607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:24:02.619739  420607 out.go:352] Setting JSON to false
	I0127 13:24:02.621435  420607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":21983,"bootTime":1737962260,"procs":375,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:24:02.621565  420607 start.go:139] virtualization: kvm guest
	I0127 13:24:02.624366  420607 out.go:177] * [old-k8s-version-838260] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:24:02.625993  420607 notify.go:220] Checking for updates...
	I0127 13:24:02.626085  420607 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:24:02.627533  420607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:24:02.629023  420607 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:24:02.630447  420607 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:24:02.632144  420607 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:24:02.633548  420607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:24:02.635632  420607 config.go:182] Loaded profile config "bridge-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:24:02.635837  420607 config.go:182] Loaded profile config "enable-default-cni-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:24:02.636013  420607 config.go:182] Loaded profile config "flannel-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:24:02.636184  420607 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:24:02.692480  420607 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 13:24:02.693827  420607 start.go:297] selected driver: kvm2
	I0127 13:24:02.693846  420607 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:24:02.693862  420607 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:24:02.694728  420607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:24:02.694843  420607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:24:02.714227  420607 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:24:02.714297  420607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 13:24:02.714626  420607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:24:02.714671  420607 cni.go:84] Creating CNI manager for ""
	I0127 13:24:02.714742  420607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:24:02.714758  420607 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 13:24:02.714828  420607 start.go:340] cluster config:
	{Name:old-k8s-version-838260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:02.714951  420607 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:24:02.716777  420607 out.go:177] * Starting "old-k8s-version-838260" primary control-plane node in "old-k8s-version-838260" cluster
	I0127 13:24:02.718068  420607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:24:02.718115  420607 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 13:24:02.718126  420607 cache.go:56] Caching tarball of preloaded images
	I0127 13:24:02.718298  420607 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:24:02.718316  420607 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 13:24:02.718453  420607 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/config.json ...
	I0127 13:24:02.718478  420607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/config.json: {Name:mka21c91528d93d7de4c2597f7776b57f61f2a18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:02.718669  420607 start.go:360] acquireMachinesLock for old-k8s-version-838260: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:24:02.718726  420607 start.go:364] duration metric: took 35.504µs to acquireMachinesLock for "old-k8s-version-838260"
	I0127 13:24:02.718751  420607 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-838260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:24:02.718844  420607 start.go:125] createHost starting for "" (driver="kvm2")
	I0127 13:24:02.721273  420607 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 13:24:02.721524  420607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:24:02.721592  420607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:24:02.742207  420607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0127 13:24:02.742777  420607 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:24:02.743515  420607 main.go:141] libmachine: Using API Version  1
	I0127 13:24:02.743538  420607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:24:02.744223  420607 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:24:02.744445  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetMachineName
	I0127 13:24:02.744627  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:24:02.744805  420607 start.go:159] libmachine.API.Create for "old-k8s-version-838260" (driver="kvm2")
	I0127 13:24:02.744837  420607 client.go:168] LocalClient.Create starting
	I0127 13:24:02.744873  420607 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem
	I0127 13:24:02.744915  420607 main.go:141] libmachine: Decoding PEM data...
	I0127 13:24:02.744937  420607 main.go:141] libmachine: Parsing certificate...
	I0127 13:24:02.745008  420607 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem
	I0127 13:24:02.745043  420607 main.go:141] libmachine: Decoding PEM data...
	I0127 13:24:02.745060  420607 main.go:141] libmachine: Parsing certificate...
	I0127 13:24:02.745097  420607 main.go:141] libmachine: Running pre-create checks...
	I0127 13:24:02.745111  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .PreCreateCheck
	I0127 13:24:02.745460  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetConfigRaw
	I0127 13:24:02.745854  420607 main.go:141] libmachine: Creating machine...
	I0127 13:24:02.745867  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .Create
	I0127 13:24:02.745999  420607 main.go:141] libmachine: (old-k8s-version-838260) creating KVM machine...
	I0127 13:24:02.746015  420607 main.go:141] libmachine: (old-k8s-version-838260) creating network...
	I0127 13:24:02.747734  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found existing default KVM network
	I0127 13:24:02.749482  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:02.749284  420637 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:e5:09} reservation:<nil>}
	I0127 13:24:02.750893  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:02.750789  420637 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:07:6e} reservation:<nil>}
	I0127 13:24:02.752600  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:02.752496  420637 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000382990}
	I0127 13:24:02.752626  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | created network xml: 
	I0127 13:24:02.752702  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | <network>
	I0127 13:24:02.752723  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |   <name>mk-old-k8s-version-838260</name>
	I0127 13:24:02.752737  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |   <dns enable='no'/>
	I0127 13:24:02.752758  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |   
	I0127 13:24:02.752773  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0127 13:24:02.752789  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |     <dhcp>
	I0127 13:24:02.752804  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0127 13:24:02.752817  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |     </dhcp>
	I0127 13:24:02.752827  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |   </ip>
	I0127 13:24:02.752835  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG |   
	I0127 13:24:02.752845  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | </network>
	I0127 13:24:02.752856  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | 
	I0127 13:24:02.762202  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | trying to create private KVM network mk-old-k8s-version-838260 192.168.61.0/24...
	I0127 13:24:02.850703  420607 main.go:141] libmachine: (old-k8s-version-838260) setting up store path in /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260 ...
	I0127 13:24:02.850733  420607 main.go:141] libmachine: (old-k8s-version-838260) building disk image from file:///home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 13:24:02.850769  420607 main.go:141] libmachine: (old-k8s-version-838260) Downloading /home/jenkins/minikube-integration/20317-361578/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 13:24:02.850931  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | private KVM network mk-old-k8s-version-838260 192.168.61.0/24 created
	I0127 13:24:02.850961  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:02.847752  420637 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:24:03.182928  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:03.182762  420637 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa...
	I0127 13:24:03.306487  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:03.306293  420637 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/old-k8s-version-838260.rawdisk...
	I0127 13:24:03.306675  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | Writing magic tar header
	I0127 13:24:03.306699  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | Writing SSH key tar header
	I0127 13:24:03.306781  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:03.306717  420637 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260 ...
	I0127 13:24:03.306880  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260
	I0127 13:24:03.306908  420607 main.go:141] libmachine: (old-k8s-version-838260) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260 (perms=drwx------)
	I0127 13:24:03.306919  420607 main.go:141] libmachine: (old-k8s-version-838260) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube/machines (perms=drwxr-xr-x)
	I0127 13:24:03.306936  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube/machines
	I0127 13:24:03.306949  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:24:03.306959  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20317-361578
	I0127 13:24:03.306970  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0127 13:24:03.306977  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | checking permissions on dir: /home/jenkins
	I0127 13:24:03.306986  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | checking permissions on dir: /home
	I0127 13:24:03.306993  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | skipping /home - not owner
	I0127 13:24:03.307008  420607 main.go:141] libmachine: (old-k8s-version-838260) setting executable bit set on /home/jenkins/minikube-integration/20317-361578/.minikube (perms=drwxr-xr-x)
	I0127 13:24:03.307018  420607 main.go:141] libmachine: (old-k8s-version-838260) setting executable bit set on /home/jenkins/minikube-integration/20317-361578 (perms=drwxrwxr-x)
	I0127 13:24:03.307028  420607 main.go:141] libmachine: (old-k8s-version-838260) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0127 13:24:03.307036  420607 main.go:141] libmachine: (old-k8s-version-838260) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0127 13:24:03.307045  420607 main.go:141] libmachine: (old-k8s-version-838260) creating domain...
	I0127 13:24:03.308358  420607 main.go:141] libmachine: (old-k8s-version-838260) define libvirt domain using xml: 
	I0127 13:24:03.308381  420607 main.go:141] libmachine: (old-k8s-version-838260) <domain type='kvm'>
	I0127 13:24:03.308391  420607 main.go:141] libmachine: (old-k8s-version-838260)   <name>old-k8s-version-838260</name>
	I0127 13:24:03.308399  420607 main.go:141] libmachine: (old-k8s-version-838260)   <memory unit='MiB'>2200</memory>
	I0127 13:24:03.308408  420607 main.go:141] libmachine: (old-k8s-version-838260)   <vcpu>2</vcpu>
	I0127 13:24:03.308415  420607 main.go:141] libmachine: (old-k8s-version-838260)   <features>
	I0127 13:24:03.308426  420607 main.go:141] libmachine: (old-k8s-version-838260)     <acpi/>
	I0127 13:24:03.308436  420607 main.go:141] libmachine: (old-k8s-version-838260)     <apic/>
	I0127 13:24:03.308458  420607 main.go:141] libmachine: (old-k8s-version-838260)     <pae/>
	I0127 13:24:03.308468  420607 main.go:141] libmachine: (old-k8s-version-838260)     
	I0127 13:24:03.308477  420607 main.go:141] libmachine: (old-k8s-version-838260)   </features>
	I0127 13:24:03.308487  420607 main.go:141] libmachine: (old-k8s-version-838260)   <cpu mode='host-passthrough'>
	I0127 13:24:03.308493  420607 main.go:141] libmachine: (old-k8s-version-838260)   
	I0127 13:24:03.308497  420607 main.go:141] libmachine: (old-k8s-version-838260)   </cpu>
	I0127 13:24:03.308502  420607 main.go:141] libmachine: (old-k8s-version-838260)   <os>
	I0127 13:24:03.308508  420607 main.go:141] libmachine: (old-k8s-version-838260)     <type>hvm</type>
	I0127 13:24:03.308513  420607 main.go:141] libmachine: (old-k8s-version-838260)     <boot dev='cdrom'/>
	I0127 13:24:03.308518  420607 main.go:141] libmachine: (old-k8s-version-838260)     <boot dev='hd'/>
	I0127 13:24:03.308523  420607 main.go:141] libmachine: (old-k8s-version-838260)     <bootmenu enable='no'/>
	I0127 13:24:03.308529  420607 main.go:141] libmachine: (old-k8s-version-838260)   </os>
	I0127 13:24:03.308534  420607 main.go:141] libmachine: (old-k8s-version-838260)   <devices>
	I0127 13:24:03.308541  420607 main.go:141] libmachine: (old-k8s-version-838260)     <disk type='file' device='cdrom'>
	I0127 13:24:03.308553  420607 main.go:141] libmachine: (old-k8s-version-838260)       <source file='/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/boot2docker.iso'/>
	I0127 13:24:03.308563  420607 main.go:141] libmachine: (old-k8s-version-838260)       <target dev='hdc' bus='scsi'/>
	I0127 13:24:03.308572  420607 main.go:141] libmachine: (old-k8s-version-838260)       <readonly/>
	I0127 13:24:03.308581  420607 main.go:141] libmachine: (old-k8s-version-838260)     </disk>
	I0127 13:24:03.308590  420607 main.go:141] libmachine: (old-k8s-version-838260)     <disk type='file' device='disk'>
	I0127 13:24:03.308603  420607 main.go:141] libmachine: (old-k8s-version-838260)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0127 13:24:03.308623  420607 main.go:141] libmachine: (old-k8s-version-838260)       <source file='/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/old-k8s-version-838260.rawdisk'/>
	I0127 13:24:03.308635  420607 main.go:141] libmachine: (old-k8s-version-838260)       <target dev='hda' bus='virtio'/>
	I0127 13:24:03.308645  420607 main.go:141] libmachine: (old-k8s-version-838260)     </disk>
	I0127 13:24:03.308656  420607 main.go:141] libmachine: (old-k8s-version-838260)     <interface type='network'>
	I0127 13:24:03.308670  420607 main.go:141] libmachine: (old-k8s-version-838260)       <source network='mk-old-k8s-version-838260'/>
	I0127 13:24:03.308680  420607 main.go:141] libmachine: (old-k8s-version-838260)       <model type='virtio'/>
	I0127 13:24:03.308688  420607 main.go:141] libmachine: (old-k8s-version-838260)     </interface>
	I0127 13:24:03.308696  420607 main.go:141] libmachine: (old-k8s-version-838260)     <interface type='network'>
	I0127 13:24:03.308702  420607 main.go:141] libmachine: (old-k8s-version-838260)       <source network='default'/>
	I0127 13:24:03.308708  420607 main.go:141] libmachine: (old-k8s-version-838260)       <model type='virtio'/>
	I0127 13:24:03.308713  420607 main.go:141] libmachine: (old-k8s-version-838260)     </interface>
	I0127 13:24:03.308720  420607 main.go:141] libmachine: (old-k8s-version-838260)     <serial type='pty'>
	I0127 13:24:03.308725  420607 main.go:141] libmachine: (old-k8s-version-838260)       <target port='0'/>
	I0127 13:24:03.308734  420607 main.go:141] libmachine: (old-k8s-version-838260)     </serial>
	I0127 13:24:03.308742  420607 main.go:141] libmachine: (old-k8s-version-838260)     <console type='pty'>
	I0127 13:24:03.308746  420607 main.go:141] libmachine: (old-k8s-version-838260)       <target type='serial' port='0'/>
	I0127 13:24:03.308753  420607 main.go:141] libmachine: (old-k8s-version-838260)     </console>
	I0127 13:24:03.308757  420607 main.go:141] libmachine: (old-k8s-version-838260)     <rng model='virtio'>
	I0127 13:24:03.308763  420607 main.go:141] libmachine: (old-k8s-version-838260)       <backend model='random'>/dev/random</backend>
	I0127 13:24:03.308769  420607 main.go:141] libmachine: (old-k8s-version-838260)     </rng>
	I0127 13:24:03.308773  420607 main.go:141] libmachine: (old-k8s-version-838260)     
	I0127 13:24:03.308777  420607 main.go:141] libmachine: (old-k8s-version-838260)     
	I0127 13:24:03.308781  420607 main.go:141] libmachine: (old-k8s-version-838260)   </devices>
	I0127 13:24:03.308785  420607 main.go:141] libmachine: (old-k8s-version-838260) </domain>
	I0127 13:24:03.308792  420607 main.go:141] libmachine: (old-k8s-version-838260) 
	I0127 13:24:03.421811  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:f7:a1:f9 in network default
	I0127 13:24:03.422714  420607 main.go:141] libmachine: (old-k8s-version-838260) starting domain...
	I0127 13:24:03.422736  420607 main.go:141] libmachine: (old-k8s-version-838260) ensuring networks are active...
	I0127 13:24:03.422767  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:03.423744  420607 main.go:141] libmachine: (old-k8s-version-838260) Ensuring network default is active
	I0127 13:24:03.423990  420607 main.go:141] libmachine: (old-k8s-version-838260) Ensuring network mk-old-k8s-version-838260 is active
	I0127 13:24:03.446625  420607 main.go:141] libmachine: (old-k8s-version-838260) getting domain XML...
	I0127 13:24:03.447299  420607 main.go:141] libmachine: (old-k8s-version-838260) creating domain...
	I0127 13:24:05.843947  420607 main.go:141] libmachine: (old-k8s-version-838260) waiting for IP...
	I0127 13:24:05.844704  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:05.845335  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:05.845389  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:05.845329  420637 retry.go:31] will retry after 251.067487ms: waiting for domain to come up
	I0127 13:24:06.097719  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:06.098230  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:06.098260  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:06.098203  420637 retry.go:31] will retry after 332.885075ms: waiting for domain to come up
	I0127 13:24:06.432861  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:06.433755  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:06.433780  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:06.433727  420637 retry.go:31] will retry after 456.49937ms: waiting for domain to come up
	I0127 13:24:06.891385  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:06.891877  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:06.891920  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:06.891870  420637 retry.go:31] will retry after 378.105928ms: waiting for domain to come up
	I0127 13:24:07.271470  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:07.272100  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:07.272134  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:07.272075  420637 retry.go:31] will retry after 563.378131ms: waiting for domain to come up
	I0127 13:24:07.837440  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:07.838083  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:07.838126  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:07.838043  420637 retry.go:31] will retry after 834.641155ms: waiting for domain to come up
	I0127 13:24:08.674012  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:08.674519  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:08.674628  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:08.674524  420637 retry.go:31] will retry after 1.166891934s: waiting for domain to come up
	I0127 13:24:09.842742  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:09.843347  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:09.843377  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:09.843328  420637 retry.go:31] will retry after 929.5377ms: waiting for domain to come up
	I0127 13:24:10.775012  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:10.775560  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:10.775582  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:10.775536  420637 retry.go:31] will retry after 1.58446759s: waiting for domain to come up
	I0127 13:24:12.362202  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:12.362791  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:12.362823  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:12.362740  420637 retry.go:31] will retry after 2.082148938s: waiting for domain to come up
	I0127 13:24:14.447487  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:14.447899  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:14.447919  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:14.447864  420637 retry.go:31] will retry after 1.885925944s: waiting for domain to come up
	I0127 13:24:16.335839  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:16.336330  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:16.336362  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:16.336299  420637 retry.go:31] will retry after 3.514437581s: waiting for domain to come up
	I0127 13:24:19.852414  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:19.852933  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:19.852964  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:19.852871  420637 retry.go:31] will retry after 3.483835815s: waiting for domain to come up
	I0127 13:24:23.337846  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:23.338244  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:24:23.338275  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:24:23.338211  420637 retry.go:31] will retry after 3.715469213s: waiting for domain to come up
	I0127 13:24:27.055989  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.056556  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has current primary IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.056580  420607 main.go:141] libmachine: (old-k8s-version-838260) found domain IP: 192.168.61.159
	I0127 13:24:27.056598  420607 main.go:141] libmachine: (old-k8s-version-838260) reserving static IP address...
	I0127 13:24:27.056920  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-838260", mac: "52:54:00:9d:07:25", ip: "192.168.61.159"} in network mk-old-k8s-version-838260
	I0127 13:24:27.136100  420607 main.go:141] libmachine: (old-k8s-version-838260) reserved static IP address 192.168.61.159 for domain old-k8s-version-838260
	I0127 13:24:27.136135  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | Getting to WaitForSSH function...
	I0127 13:24:27.136153  420607 main.go:141] libmachine: (old-k8s-version-838260) waiting for SSH...
	I0127 13:24:27.139133  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.139608  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:27.139645  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.139816  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | Using SSH client type: external
	I0127 13:24:27.139841  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa (-rw-------)
	I0127 13:24:27.139882  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:24:27.139896  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | About to run SSH command:
	I0127 13:24:27.139909  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | exit 0
	I0127 13:24:27.262397  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | SSH cmd err, output: <nil>: 
	I0127 13:24:27.262724  420607 main.go:141] libmachine: (old-k8s-version-838260) KVM machine creation complete
	I0127 13:24:27.263056  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetConfigRaw
	I0127 13:24:27.263628  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:24:27.263856  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:24:27.264047  420607 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 13:24:27.264080  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetState
	I0127 13:24:27.265356  420607 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 13:24:27.265381  420607 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 13:24:27.265388  420607 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 13:24:27.265400  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:27.267850  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.268213  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:27.268237  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.268414  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:27.268604  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.268763  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.268899  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:27.269033  420607 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:27.269223  420607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:24:27.269234  420607 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 13:24:27.369683  420607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:24:27.369712  420607 main.go:141] libmachine: Detecting the provisioner...
	I0127 13:24:27.369723  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:27.372416  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.372780  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:27.372811  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.372917  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:27.373085  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.373239  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.373421  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:27.373628  420607 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:27.373795  420607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:24:27.373806  420607 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 13:24:27.479134  420607 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0127 13:24:27.479236  420607 main.go:141] libmachine: found compatible host: buildroot
	I0127 13:24:27.479248  420607 main.go:141] libmachine: Provisioning with buildroot...
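
The provisioner above is identified from the ID field of the /etc/os-release contents fetched over SSH. A minimal Go sketch of that parsing, using only the standard library (detectProvisioner is an illustrative name, not minikube's actual helper):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// detectProvisioner extracts the ID and VERSION_ID fields from
	// /etc/os-release output such as the Buildroot block logged above.
	func detectProvisioner(osRelease string) (id, version string) {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if k, v, ok := strings.Cut(line, "="); ok {
				v = strings.Trim(v, `"`)
				switch k {
				case "ID":
					id = v
				case "VERSION_ID":
					version = v
				}
			}
		}
		return id, version
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
		id, ver := detectProvisioner(out)
		fmt.Println(id, ver) // buildroot 2023.02.9
	}
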
	I0127 13:24:27.479256  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetMachineName
	I0127 13:24:27.479545  420607 buildroot.go:166] provisioning hostname "old-k8s-version-838260"
	I0127 13:24:27.479578  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetMachineName
	I0127 13:24:27.479768  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:27.482484  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.482904  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:27.482935  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.483055  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:27.483241  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.483377  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.483531  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:27.483679  420607 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:27.483846  420607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:24:27.483864  420607 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-838260 && echo "old-k8s-version-838260" | sudo tee /etc/hostname
	I0127 13:24:27.603630  420607 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-838260
	
	I0127 13:24:27.603675  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:27.606748  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.607091  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:27.607137  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.607268  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:27.607455  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.607598  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.607712  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:27.607869  420607 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:27.608095  420607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:24:27.608113  420607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-838260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-838260/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-838260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:24:27.724725  420607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
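
The SSH command above updates /etc/hosts only when no line already ends in the new hostname, either rewriting an existing 127.0.1.1 entry or appending one. The same idempotent rewrite, sketched in Go against an in-memory copy of the file (illustrative only, not minikube's implementation):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell logic above: if no line already maps
	// the hostname, either rewrite an existing 127.0.1.1 line or append one.
	func ensureHostsEntry(hosts, hostname string) string {
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
			return hosts // entry already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
		}
		if !strings.HasSuffix(hosts, "\n") {
			hosts += "\n"
		}
		return hosts + "127.0.1.1 " + hostname + "\n"
	}

	func main() {
		// sample file contents; the pre-existing "buildroot" entry is illustrative
		in := "127.0.0.1 localhost\n127.0.1.1 buildroot\n"
		fmt.Print(ensureHostsEntry(in, "old-k8s-version-838260"))
	}
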
	I0127 13:24:27.724776  420607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:24:27.724823  420607 buildroot.go:174] setting up certificates
	I0127 13:24:27.724842  420607 provision.go:84] configureAuth start
	I0127 13:24:27.724862  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetMachineName
	I0127 13:24:27.725177  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetIP
	I0127 13:24:27.728220  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.728662  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:27.728683  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.728889  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:27.731274  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.731609  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:27.731646  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.731731  420607 provision.go:143] copyHostCerts
	I0127 13:24:27.731806  420607 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:24:27.731819  420607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:24:27.731895  420607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:24:27.732001  420607 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:24:27.732012  420607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:24:27.732055  420607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:24:27.732132  420607 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:24:27.732143  420607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:24:27.732176  420607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:24:27.732241  420607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-838260 san=[127.0.0.1 192.168.61.159 localhost minikube old-k8s-version-838260]
	I0127 13:24:27.972453  420607 provision.go:177] copyRemoteCerts
	I0127 13:24:27.972534  420607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:24:27.972570  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:27.975443  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.975823  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:27.975851  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:27.976026  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:27.976276  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:27.976462  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:27.976610  420607 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa Username:docker}
	I0127 13:24:28.057835  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 13:24:28.086623  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:24:28.115490  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 13:24:28.151097  420607 provision.go:87] duration metric: took 426.234142ms to configureAuth
	I0127 13:24:28.151146  420607 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:24:28.151380  420607 config.go:182] Loaded profile config "old-k8s-version-838260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 13:24:28.151486  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:28.155401  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.155710  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:28.155777  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.155949  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:28.156212  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:28.156441  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:28.156649  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:28.156960  420607 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:28.157227  420607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:24:28.157264  420607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:24:28.418763  420607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:24:28.418843  420607 main.go:141] libmachine: Checking connection to Docker...
	I0127 13:24:28.418856  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetURL
	I0127 13:24:28.420158  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | using libvirt version 6000000
	I0127 13:24:28.422675  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.423133  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:28.423170  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.423374  420607 main.go:141] libmachine: Docker is up and running!
	I0127 13:24:28.423392  420607 main.go:141] libmachine: Reticulating splines...
	I0127 13:24:28.423400  420607 client.go:171] duration metric: took 25.678553548s to LocalClient.Create
	I0127 13:24:28.423426  420607 start.go:167] duration metric: took 25.678641229s to libmachine.API.Create "old-k8s-version-838260"
	I0127 13:24:28.423440  420607 start.go:293] postStartSetup for "old-k8s-version-838260" (driver="kvm2")
	I0127 13:24:28.423455  420607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:24:28.423483  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:24:28.423736  420607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:24:28.423770  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:28.425999  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.426300  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:28.426332  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.426465  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:28.426674  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:28.426823  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:28.426946  420607 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa Username:docker}
	I0127 13:24:28.511765  420607 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:24:28.516739  420607 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:24:28.516773  420607 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:24:28.516860  420607 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:24:28.516976  420607 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:24:28.517111  420607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:24:28.527732  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:24:28.558130  420607 start.go:296] duration metric: took 134.670403ms for postStartSetup
	I0127 13:24:28.558202  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetConfigRaw
	I0127 13:24:28.558916  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetIP
	I0127 13:24:28.561729  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.562099  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:28.562139  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.562398  420607 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/config.json ...
	I0127 13:24:28.562619  420607 start.go:128] duration metric: took 25.84376017s to createHost
	I0127 13:24:28.562646  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:28.564854  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.565154  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:28.565189  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.565308  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:28.565510  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:28.565693  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:28.565827  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:28.566005  420607 main.go:141] libmachine: Using SSH client type: native
	I0127 13:24:28.566185  420607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:24:28.566198  420607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:24:28.668232  420607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984268.640573987
	
	I0127 13:24:28.668258  420607 fix.go:216] guest clock: 1737984268.640573987
	I0127 13:24:28.668266  420607 fix.go:229] Guest: 2025-01-27 13:24:28.640573987 +0000 UTC Remote: 2025-01-27 13:24:28.562633065 +0000 UTC m=+26.020537052 (delta=77.940922ms)
	I0127 13:24:28.668321  420607 fix.go:200] guest clock delta is within tolerance: 77.940922ms
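
The guest clock check parses the "date +%s.%N" output as seconds.nanoseconds and compares it with the host-side timestamp captured when the command returned; the 77.940922ms delta above falls within tolerance. A small Go sketch of that computation, reproducing the logged delta (the 2s threshold in the sketch is illustrative, not minikube's actual tolerance):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "seconds.nanoseconds" (the output of
	// `date +%s.%N`) into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
		sec, err := strconv.ParseInt(secStr, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(nsecStr, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, _ := parseGuestClock("1737984268.640573987")
		remote := time.Date(2025, 1, 27, 13, 24, 28, 562633065, time.UTC)
		delta := guest.Sub(remote)           // negative would mean the guest is behind the host
		fmt.Println(delta)                   // 77.940922ms, matching the log above
		fmt.Println(delta < 2*time.Second)   // 2s is an illustrative tolerance, not minikube's
	}
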
	I0127 13:24:28.668329  420607 start.go:83] releasing machines lock for "old-k8s-version-838260", held for 25.94958992s
	I0127 13:24:28.668364  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:24:28.669236  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetIP
	I0127 13:24:28.673033  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.673542  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:28.673572  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.673784  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:24:28.674290  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:24:28.674477  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:24:28.674561  420607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:24:28.674628  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:28.674735  420607 ssh_runner.go:195] Run: cat /version.json
	I0127 13:24:28.674755  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:24:28.677708  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.677944  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.678256  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:28.678295  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.678440  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:28.678565  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:28.678646  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:28.678756  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:28.678853  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:28.678858  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:24:28.679014  420607 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa Username:docker}
	I0127 13:24:28.679343  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:24:28.679553  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:24:28.679726  420607 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa Username:docker}
	I0127 13:24:28.789172  420607 ssh_runner.go:195] Run: systemctl --version
	I0127 13:24:28.799421  420607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:24:28.972514  420607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:24:28.980375  420607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:24:28.980479  420607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:24:28.999735  420607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
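
The find/mv step above disables bridge and podman CNI configs by renaming them with a .mk_disabled suffix. A minimal Go equivalent of that rename loop (illustrative; minikube drives it through a remote shell instead):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIs renames bridge/podman CNI configs to *.mk_disabled,
	// mirroring the find/mv pipeline in the log above.
	func disableBridgeCNIs(dir string) ([]string, error) {
		var disabled []string
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		got, err := disableBridgeCNIs("/etc/cni/net.d")
		fmt.Println(got, err)
	}
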
	I0127 13:24:28.999764  420607 start.go:495] detecting cgroup driver to use...
	I0127 13:24:28.999845  420607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:24:29.018526  420607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:24:29.034812  420607 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:24:29.034931  420607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:24:29.050336  420607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:24:29.071845  420607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:24:29.196352  420607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:24:29.394323  420607 docker.go:233] disabling docker service ...
	I0127 13:24:29.394421  420607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:24:29.414502  420607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:24:29.429522  420607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:24:29.607407  420607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:24:29.772911  420607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:24:29.787934  420607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:24:29.809283  420607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 13:24:29.809348  420607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:24:29.820342  420607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:24:29.820405  420607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:24:29.834340  420607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:24:29.848637  420607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:24:29.863138  420607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
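
The sed invocations above swap the pause_image and cgroup_manager values in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf. The same line-oriented replacement, sketched in Go (setConfValue is an illustrative helper; the key names and target values come from the log, the sample pre-edit contents do not):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setConfValue replaces any existing `key = ...` line with `key = "value"`,
	// mirroring the `sed -i 's|^.*key = .*$|...|'` edits shown above.
	func setConfValue(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
	}

	func main() {
		// sample drop-in contents (illustrative defaults, not the VM's actual file)
		conf := []byte("pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n")
		conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.2")
		conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(string(conf))
	}
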
	I0127 13:24:29.874802  420607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:24:29.884661  420607 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:24:29.884718  420607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:24:29.899184  420607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:24:29.909409  420607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:24:30.056394  420607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:24:30.180952  420607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:24:30.181031  420607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:24:30.187847  420607 start.go:563] Will wait 60s for crictl version
	I0127 13:24:30.187913  420607 ssh_runner.go:195] Run: which crictl
	I0127 13:24:30.192335  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:24:30.234999  420607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
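
Both waits above are bounded 60-second retries: first for the /var/run/crio/crio.sock socket path to appear, then for crictl to answer. A minimal Go sketch of that wait-for-socket pattern (the 500ms poll interval and helper name are assumptions, not minikube's exact values):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the path exists or the timeout expires,
	// similar in spirit to the 60s waits for crio.sock logged above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for " + path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}
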
	I0127 13:24:30.235098  420607 ssh_runner.go:195] Run: crio --version
	I0127 13:24:30.271995  420607 ssh_runner.go:195] Run: crio --version
	I0127 13:24:30.310366  420607 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 13:24:30.311442  420607 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetIP
	I0127 13:24:30.321461  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:30.321936  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:24:20 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:24:30.321974  420607 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:24:30.322478  420607 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 13:24:30.329885  420607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:24:30.354991  420607 kubeadm.go:883] updating cluster {Name:old-k8s-version-838260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:24:30.355169  420607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:24:30.355242  420607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:24:30.408536  420607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 13:24:30.408606  420607 ssh_runner.go:195] Run: which lz4
	I0127 13:24:30.414109  420607 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:24:30.419831  420607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:24:30.419859  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 13:24:32.362021  420607 crio.go:462] duration metric: took 1.947941266s to copy over tarball
	I0127 13:24:32.362131  420607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:24:35.434335  420607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.072160742s)
	I0127 13:24:35.434376  420607 crio.go:469] duration metric: took 3.072315622s to extract the tarball
	I0127 13:24:35.434388  420607 ssh_runner.go:146] rm: /preloaded.tar.lz4
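
For scale, the preload step above transfers 473237281 bytes in roughly 1.95s and extracts the tarball in about 3.07s. A quick Go calculation of the implied throughput from those logged figures:

	package main

	import "fmt"

	func main() {
		const sizeBytes = 473237281.0 // preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
		copySecs, extractSecs := 1.947941266, 3.072160742
		fmt.Printf("copy:    %.0f MiB/s\n", sizeBytes/copySecs/(1<<20))
		fmt.Printf("extract: %.0f MiB/s\n", sizeBytes/extractSecs/(1<<20))
	}
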
	I0127 13:24:35.490623  420607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:24:35.555742  420607 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 13:24:35.555772  420607 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 13:24:35.555855  420607 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:24:35.555888  420607 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:24:35.555870  420607 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:24:35.555975  420607 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:24:35.556139  420607 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:24:35.556160  420607 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 13:24:35.556291  420607 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:24:35.556385  420607 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 13:24:35.557452  420607 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:24:35.557498  420607 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:24:35.558173  420607 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 13:24:35.558194  420607 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:24:35.558353  420607 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 13:24:35.558379  420607 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:24:35.558482  420607 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:24:35.558490  420607 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:24:35.702474  420607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:24:35.703209  420607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:24:35.712937  420607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:24:35.716252  420607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:24:35.725772  420607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 13:24:35.726214  420607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 13:24:35.727557  420607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 13:24:35.889505  420607 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 13:24:35.889591  420607 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:24:35.889645  420607 ssh_runner.go:195] Run: which crictl
	I0127 13:24:35.908226  420607 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 13:24:35.908283  420607 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:24:35.908340  420607 ssh_runner.go:195] Run: which crictl
	I0127 13:24:35.934119  420607 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 13:24:35.934183  420607 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:24:35.934236  420607 ssh_runner.go:195] Run: which crictl
	I0127 13:24:35.948359  420607 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 13:24:35.948453  420607 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:24:35.948461  420607 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 13:24:35.948494  420607 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 13:24:35.948507  420607 ssh_runner.go:195] Run: which crictl
	I0127 13:24:35.948391  420607 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 13:24:35.948535  420607 ssh_runner.go:195] Run: which crictl
	I0127 13:24:35.948549  420607 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:24:35.948590  420607 ssh_runner.go:195] Run: which crictl
	I0127 13:24:35.960148  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:24:35.960199  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:24:35.960228  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:24:35.960229  420607 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 13:24:35.960301  420607 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 13:24:35.960302  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:24:35.960344  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:24:35.960345  420607 ssh_runner.go:195] Run: which crictl
	I0127 13:24:35.960393  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:24:36.108048  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:24:36.108058  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:24:36.108158  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:24:36.108204  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:24:36.108136  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:24:36.108235  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:24:36.110642  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:24:36.256690  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:24:36.275750  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:24:36.275841  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:24:36.280760  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:24:36.280804  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:24:36.280864  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:24:36.290636  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:24:36.368481  420607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 13:24:36.438979  420607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 13:24:36.439084  420607 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:24:36.439099  420607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 13:24:36.443679  420607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 13:24:36.443685  420607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 13:24:36.443729  420607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 13:24:36.477256  420607 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 13:24:37.825156  420607 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:24:37.973801  420607 cache_images.go:92] duration metric: took 2.418005308s to LoadCachedImages
	W0127 13:24:37.973901  420607 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0127 13:24:37.973918  420607 kubeadm.go:934] updating node { 192.168.61.159 8443 v1.20.0 crio true true} ...
	I0127 13:24:37.974051  420607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-838260 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:24:37.974148  420607 ssh_runner.go:195] Run: crio config
	I0127 13:24:38.026402  420607 cni.go:84] Creating CNI manager for ""
	I0127 13:24:38.026429  420607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:24:38.026439  420607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:24:38.026459  420607 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-838260 NodeName:old-k8s-version-838260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 13:24:38.026685  420607 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-838260"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:24:38.026761  420607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 13:24:38.037407  420607 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:24:38.037473  420607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:24:38.048413  420607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 13:24:38.065462  420607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:24:38.083062  420607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 13:24:38.101073  420607 ssh_runner.go:195] Run: grep 192.168.61.159	control-plane.minikube.internal$ /etc/hosts
	I0127 13:24:38.105111  420607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:24:38.119288  420607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:24:38.255234  420607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:24:38.273168  420607 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260 for IP: 192.168.61.159
	I0127 13:24:38.273198  420607 certs.go:194] generating shared ca certs ...
	I0127 13:24:38.273221  420607 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:38.273440  420607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:24:38.273510  420607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:24:38.273524  420607 certs.go:256] generating profile certs ...
	I0127 13:24:38.273583  420607 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/client.key
	I0127 13:24:38.273607  420607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/client.crt with IP's: []
	I0127 13:24:38.395700  420607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/client.crt ...
	I0127 13:24:38.395754  420607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/client.crt: {Name:mkd9cd57a853c2423609266e0d05f12c31d16cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:38.430731  420607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/client.key ...
	I0127 13:24:38.430773  420607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/client.key: {Name:mk7834cf564158b6a066d4c40ece7b4cfcc88eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:38.430926  420607 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.key.552336b8
	I0127 13:24:38.430944  420607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.crt.552336b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.159]
	I0127 13:24:38.662278  420607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.crt.552336b8 ...
	I0127 13:24:38.662309  420607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.crt.552336b8: {Name:mk5a69aae2da5a3eef8b75eefd4643a0ad632fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:38.663347  420607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.key.552336b8 ...
	I0127 13:24:38.663382  420607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.key.552336b8: {Name:mkd75dfacab1f989b7472e022f662bf5b4df17f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:38.663528  420607 certs.go:381] copying /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.crt.552336b8 -> /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.crt
	I0127 13:24:38.663647  420607 certs.go:385] copying /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.key.552336b8 -> /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.key
	I0127 13:24:38.663732  420607 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.key
	I0127 13:24:38.663755  420607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.crt with IP's: []
	I0127 13:24:38.949813  420607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.crt ...
	I0127 13:24:38.949864  420607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.crt: {Name:mkf81c2270b9dd311d517b85bc1a486a3408d195 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:24:38.950067  420607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.key ...
	I0127 13:24:38.950089  420607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.key: {Name:mkcd18a1500a5437e1d6e150e3ca9596dad63049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
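
The profile certs generated above are CA-signed pairs whose SANs include the service IP 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP. As a rough illustration of that pattern only (a throwaway self-signed CA, error handling elided, not the certs.go implementation), crypto/x509 can produce an equivalent serving certificate:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; in the log the existing minikubeCA key pair is reused.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver-style serving cert with the IP SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.159"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
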
	I0127 13:24:38.950328  420607 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:24:38.950381  420607 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:24:38.950403  420607 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:24:38.950440  420607 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:24:38.950472  420607 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:24:38.950503  420607 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:24:38.950571  420607 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:24:38.951385  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:24:38.986336  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:24:39.015856  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:24:39.048577  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:24:39.077740  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 13:24:39.108380  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:24:39.149177  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:24:39.178232  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:24:39.218281  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:24:39.269496  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:24:39.293209  420607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:24:39.316812  420607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:24:39.332965  420607 ssh_runner.go:195] Run: openssl version
	I0127 13:24:39.338784  420607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:24:39.349476  420607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:24:39.353924  420607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:24:39.353971  420607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:24:39.359752  420607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:24:39.370264  420607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:24:39.380886  420607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:24:39.385372  420607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:24:39.385416  420607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:24:39.391910  420607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:24:39.403525  420607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:24:39.418473  420607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:39.424137  420607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:39.424196  420607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:24:39.431449  420607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
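
The openssl x509 -hash / ln -fs steps above register each CA under its OpenSSL subject-hash name (<hash>.0) in /etc/ssl/certs. A small sketch that shells out to the same openssl invocation and creates the symlink; the helper name is hypothetical, the paths are taken from the log, and in practice this would need root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors `openssl x509 -hash -noout -in <pem>` followed by
// `ln -fs <pem> <certsDir>/<hash>.0`, as seen in the log above.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
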
	I0127 13:24:39.444854  420607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:24:39.448927  420607 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
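
The failed stat above is how the "likely first start" decision is reached: the apiserver-kubelet-client cert only exists once kubeadm has already initialised a cluster on the node. The equivalent check in Go is just an os.Stat probe (illustrative only, messages assumed):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const cert = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	if _, err := os.Stat(cert); errors.Is(err, fs.ErrNotExist) {
		fmt.Println("cert doesn't exist, likely first start")
	} else if err != nil {
		fmt.Println("stat failed:", err)
	} else {
		fmt.Println("cert present, cluster was initialised before")
	}
}
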
	I0127 13:24:39.448984  420607 kubeadm.go:392] StartCluster: {Name:old-k8s-version-838260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:24:39.449059  420607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:24:39.449129  420607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:24:39.490600  420607 cri.go:89] found id: ""
	I0127 13:24:39.490669  420607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:24:39.501148  420607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:24:39.511231  420607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:24:39.521053  420607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:24:39.521077  420607 kubeadm.go:157] found existing configuration files:
	
	I0127 13:24:39.521124  420607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:24:39.531794  420607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:24:39.531840  420607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:24:39.541111  420607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:24:39.551577  420607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:24:39.551637  420607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:24:39.561428  420607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:24:39.570990  420607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:24:39.571050  420607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:24:39.583057  420607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:24:39.596421  420607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:24:39.596482  420607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
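
The four grep/rm pairs above implement a stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs, so kubeadm regenerates it. A compact sketch of that loop (standalone illustration, not kubeadm.go itself):

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + name
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it so kubeadm writes a fresh one.
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
			continue
		}
		fmt.Println("keeping", path)
	}
}
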
	I0127 13:24:39.610366  420607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:24:39.748723  420607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:24:39.748870  420607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:24:39.924239  420607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:24:39.924396  420607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:24:39.924534  420607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:24:40.152463  420607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:24:40.154382  420607 out.go:235]   - Generating certificates and keys ...
	I0127 13:24:40.154495  420607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:24:40.154603  420607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:24:40.287249  420607 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 13:24:40.591181  420607 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 13:24:40.956998  420607 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 13:24:41.128379  420607 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 13:24:41.262960  420607 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 13:24:41.263169  420607 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-838260] and IPs [192.168.61.159 127.0.0.1 ::1]
	I0127 13:24:41.309383  420607 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 13:24:41.309690  420607 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-838260] and IPs [192.168.61.159 127.0.0.1 ::1]
	I0127 13:24:41.379630  420607 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 13:24:41.628240  420607 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 13:24:41.808916  420607 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 13:24:41.809272  420607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:24:42.421944  420607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:24:42.788497  420607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:24:42.947115  420607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:24:43.022801  420607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:24:43.038375  420607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:24:43.039493  420607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:24:43.039551  420607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:24:43.175108  420607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:24:43.176558  420607 out.go:235]   - Booting up control plane ...
	I0127 13:24:43.176703  420607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:24:43.188671  420607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:24:43.189774  420607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:24:43.190598  420607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:24:43.200121  420607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:25:23.195671  420607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:25:23.196658  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:25:23.196938  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:25:28.196907  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:25:28.197132  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:25:38.196573  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:25:38.196910  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:25:58.196264  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:25:58.196655  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:26:38.199169  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:26:38.199359  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:26:38.199390  420607 kubeadm.go:310] 
	I0127 13:26:38.199479  420607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:26:38.199552  420607 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:26:38.199567  420607 kubeadm.go:310] 
	I0127 13:26:38.199610  420607 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:26:38.199663  420607 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:26:38.199810  420607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:26:38.199828  420607 kubeadm.go:310] 
	I0127 13:26:38.199965  420607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:26:38.200003  420607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:26:38.200053  420607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:26:38.200063  420607 kubeadm.go:310] 
	I0127 13:26:38.200213  420607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:26:38.200353  420607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:26:38.200369  420607 kubeadm.go:310] 
	I0127 13:26:38.200498  420607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:26:38.200619  420607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:26:38.200749  420607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:26:38.200860  420607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:26:38.200874  420607 kubeadm.go:310] 
	I0127 13:26:38.201457  420607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:26:38.201578  420607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:26:38.201707  420607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
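
The [kubelet-check] lines above come from kubeadm polling the kubelet's local healthz endpoint on port 10248 until it answers or the wait-control-plane budget runs out; every probe here is refused because the kubelet never came up. A standalone probe with the same shape is sketched below; the 5s interval and 4m deadline are assumptions for illustration, not kubeadm's exact schedule:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // roughly the wait-control-plane budget
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			fmt.Println("probe failed:", err) // e.g. connection refused, as in the log
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
			fmt.Println("probe returned", resp.Status)
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}
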
	W0127 13:26:38.201871  420607 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-838260] and IPs [192.168.61.159 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-838260] and IPs [192.168.61.159 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 13:26:38.201917  420607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:26:39.300940  420607 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.098984276s)
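
Between attempts the node is wiped with kubeadm reset --force, and the log records how long the command took. Timing an external command like that is a small wrapper around exec (sketch only; do not run this against a node you care about):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Destructive: tears down any kubeadm-managed state on the host.
	cmd := exec.Command("sudo", "kubeadm", "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force")
	err := cmd.Run()
	fmt.Printf("Completed: %v in %s (err=%v)\n", cmd.Args, time.Since(start), err)
}
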
	I0127 13:26:39.301047  420607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:26:39.322204  420607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:26:39.336675  420607 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:26:39.336701  420607 kubeadm.go:157] found existing configuration files:
	
	I0127 13:26:39.336753  420607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:26:39.350557  420607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:26:39.350635  420607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:26:39.365256  420607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:26:39.378984  420607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:26:39.379063  420607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:26:39.393176  420607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:26:39.406715  420607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:26:39.406784  420607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:26:39.419521  420607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:26:39.430726  420607 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:26:39.430802  420607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:26:39.444623  420607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:26:39.533693  420607 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:26:39.533786  420607 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:26:39.677894  420607 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:26:39.678055  420607 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:26:39.678223  420607 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:26:39.868220  420607 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:26:39.870791  420607 out.go:235]   - Generating certificates and keys ...
	I0127 13:26:39.870868  420607 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:26:39.870955  420607 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:26:39.871065  420607 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:26:39.871157  420607 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:26:39.871253  420607 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:26:39.871304  420607 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:26:39.871361  420607 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:26:39.871413  420607 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:26:39.871521  420607 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:26:39.871641  420607 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:26:39.871695  420607 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:26:39.871776  420607 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:26:40.041473  420607 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:26:40.087320  420607 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:26:40.211153  420607 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:26:40.477182  420607 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:26:40.491843  420607 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:26:40.492891  420607 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:26:40.492994  420607 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:26:40.620119  420607 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:26:40.621843  420607 out.go:235]   - Booting up control plane ...
	I0127 13:26:40.621935  420607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:26:40.632499  420607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:26:40.634566  420607 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:26:40.637801  420607 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:26:40.641329  420607 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:27:20.644434  420607 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:27:20.644812  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:27:20.645056  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:27:25.645734  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:27:25.645958  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:27:35.646618  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:27:35.646873  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:27:55.646176  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:27:55.646498  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:28:35.646080  420607 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:28:35.646367  420607 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:28:35.646382  420607 kubeadm.go:310] 
	I0127 13:28:35.646432  420607 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:28:35.646522  420607 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:28:35.646571  420607 kubeadm.go:310] 
	I0127 13:28:35.646625  420607 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:28:35.646676  420607 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:28:35.646818  420607 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:28:35.646829  420607 kubeadm.go:310] 
	I0127 13:28:35.646976  420607 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:28:35.647023  420607 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:28:35.647068  420607 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:28:35.647078  420607 kubeadm.go:310] 
	I0127 13:28:35.647217  420607 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:28:35.647356  420607 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:28:35.647377  420607 kubeadm.go:310] 
	I0127 13:28:35.647550  420607 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:28:35.647686  420607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:28:35.647792  420607 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:28:35.647886  420607 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:28:35.647898  420607 kubeadm.go:310] 
	I0127 13:28:35.648510  420607 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:28:35.648644  420607 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:28:35.648804  420607 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:28:35.648820  420607 kubeadm.go:394] duration metric: took 3m56.199840701s to StartCluster
	I0127 13:28:35.648872  420607 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:28:35.648929  420607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:28:35.702724  420607 cri.go:89] found id: ""
	I0127 13:28:35.702753  420607 logs.go:282] 0 containers: []
	W0127 13:28:35.702764  420607 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:28:35.702776  420607 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:28:35.702841  420607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:28:35.745198  420607 cri.go:89] found id: ""
	I0127 13:28:35.745227  420607 logs.go:282] 0 containers: []
	W0127 13:28:35.745238  420607 logs.go:284] No container was found matching "etcd"
	I0127 13:28:35.745245  420607 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:28:35.745299  420607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:28:35.785520  420607 cri.go:89] found id: ""
	I0127 13:28:35.785551  420607 logs.go:282] 0 containers: []
	W0127 13:28:35.785562  420607 logs.go:284] No container was found matching "coredns"
	I0127 13:28:35.785570  420607 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:28:35.785631  420607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:28:35.827633  420607 cri.go:89] found id: ""
	I0127 13:28:35.827699  420607 logs.go:282] 0 containers: []
	W0127 13:28:35.827746  420607 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:28:35.827761  420607 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:28:35.827837  420607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:28:35.874126  420607 cri.go:89] found id: ""
	I0127 13:28:35.874166  420607 logs.go:282] 0 containers: []
	W0127 13:28:35.874180  420607 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:28:35.874189  420607 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:28:35.874263  420607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:28:35.923624  420607 cri.go:89] found id: ""
	I0127 13:28:35.923659  420607 logs.go:282] 0 containers: []
	W0127 13:28:35.923670  420607 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:28:35.923678  420607 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:28:35.923752  420607 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:28:35.973601  420607 cri.go:89] found id: ""
	I0127 13:28:35.973632  420607 logs.go:282] 0 containers: []
	W0127 13:28:35.973645  420607 logs.go:284] No container was found matching "kindnet"
	I0127 13:28:35.973660  420607 logs.go:123] Gathering logs for container status ...
	I0127 13:28:35.973680  420607 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
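
The post-mortem above shells out to crictl (falling back to docker ps) to confirm that no control-plane containers were ever created. The same check can be scripted; the sketch below reuses the crictl flags quoted in the kubeadm hint earlier and filters out pause containers (socket path taken from the log, sudo assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "/var/run/crio/crio.sock", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
			fmt.Println(line)
		}
	}
}
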
	I0127 13:28:36.047818  420607 logs.go:123] Gathering logs for kubelet ...
	I0127 13:28:36.047861  420607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:28:36.105676  420607 logs.go:123] Gathering logs for dmesg ...
	I0127 13:28:36.105713  420607 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:28:36.120263  420607 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:28:36.120299  420607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:28:36.256772  420607 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
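
kubectl describe nodes fails here because nothing is listening on localhost:8443: the apiserver static pod was never started. A quick reachability probe makes the distinction between "refused" (no listener at all) and a timeout (networking or firewalling) explicit; the address is taken from the error above, the timeout is an assumption:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
	if err != nil {
		// "connection refused" means no listener, i.e. the apiserver never started.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
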
	I0127 13:28:36.256802  420607 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:28:36.256819  420607 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 13:28:36.363299  420607 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 13:28:36.363399  420607 out.go:270] * 
	* 
	W0127 13:28:36.363476  420607 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:28:36.363494  420607 out.go:270] * 
	* 
	W0127 13:28:36.364268  420607 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:28:36.735787  420607 out.go:201] 
	W0127 13:28:37.020719  420607 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:28:37.020782  420607 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 13:28:37.020821  420607 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 13:28:37.058152  420607 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-838260 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 6 (254.165997ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 13:28:37.488642  426446 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-838260" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-838260" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (274.97s)
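
The failure entry above repeatedly points at one remediation path: inspect the kubelet, inspect the control-plane containers with crictl, then retry the start with the cgroup-driver hint from the suggestion line. The following is a minimal sketch of that sequence, not part of the captured output; it assumes shell access to the VM (e.g. via `minikube ssh -p old-k8s-version-838260`) and reuses only the flags, socket path, and placeholder CONTAINERID that appear in the logs above.

	# Inside the VM (after `minikube ssh -p old-k8s-version-838260`):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # ID taken from the ps output above
	
	# Back on the host, retry the start with the suggested cgroup-driver setting:
	out/minikube-linux-amd64 start -p old-k8s-version-838260 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

The stale-kubeconfig warning in the post-mortem status would be handled separately with `minikube update-context` as the status output itself suggests (adding `-p old-k8s-version-838260` to target that profile is an assumption here).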

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (1600.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-563155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 13:27:34.899047  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:39.248443  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:39.254919  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:39.266326  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:39.287720  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:39.329281  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:39.410866  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:39.572540  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:39.894320  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:40.535684  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:41.816982  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:44.379264  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:49.501265  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:55.381205  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:59.743141  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:02.988267  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-563155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m38.292767111s)

                                                
                                                
-- stdout --
	* [no-preload-563155] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-563155" primary control-plane node in "no-preload-563155" cluster
	* Restarting existing kvm2 VM for "no-preload-563155" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-563155 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
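
As a hedged follow-up to the addon hint in the stdout above (not part of the captured run): enabling metrics-server as suggested and then listing addon states is one way to confirm the dashboard's dependencies. The `minikube addons list` verification step is an assumption here; only the enable command comes from the output itself.

	minikube -p no-preload-563155 addons enable metrics-server
	minikube -p no-preload-563155 addons list   # confirm metrics-server and dashboard show as enabled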
** stderr ** 
	I0127 13:27:27.708059  425710 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:27:27.708196  425710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:27:27.708205  425710 out.go:358] Setting ErrFile to fd 2...
	I0127 13:27:27.708209  425710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:27:27.708363  425710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:27:27.708888  425710 out.go:352] Setting JSON to false
	I0127 13:27:27.709846  425710 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":22188,"bootTime":1737962260,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:27:27.709958  425710 start.go:139] virtualization: kvm guest
	I0127 13:27:27.712173  425710 out.go:177] * [no-preload-563155] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:27:27.713530  425710 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:27:27.713550  425710 notify.go:220] Checking for updates...
	I0127 13:27:27.715938  425710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:27:27.717079  425710 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:27:27.718195  425710 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:27:27.719305  425710 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:27:27.720360  425710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:27:27.721788  425710 config.go:182] Loaded profile config "no-preload-563155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:27:27.722142  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:27:27.722190  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:27:27.737130  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36249
	I0127 13:27:27.737491  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:27:27.738013  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:27:27.738040  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:27:27.738399  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:27:27.738661  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:27.738905  425710 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:27:27.739215  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:27:27.739268  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:27:27.753613  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0127 13:27:27.753976  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:27:27.754396  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:27:27.754418  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:27:27.754771  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:27:27.754980  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:27.789031  425710 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:27:27.790354  425710 start.go:297] selected driver: kvm2
	I0127 13:27:27.790368  425710 start.go:901] validating driver "kvm2" against &{Name:no-preload-563155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-563155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:27:27.790490  425710 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:27:27.791221  425710 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.791301  425710 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:27:27.806617  425710 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:27:27.807000  425710 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:27:27.807050  425710 cni.go:84] Creating CNI manager for ""
	I0127 13:27:27.807116  425710 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:27:27.807173  425710 start.go:340] cluster config:
	{Name:no-preload-563155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-563155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:27:27.807307  425710 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.809562  425710 out.go:177] * Starting "no-preload-563155" primary control-plane node in "no-preload-563155" cluster
	I0127 13:27:27.810784  425710 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:27:27.810916  425710 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/config.json ...
	I0127 13:27:27.811020  425710 cache.go:107] acquiring lock: {Name:mk7d1e92102b7e6fb18d4a634f3addd6af48b2d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.811036  425710 cache.go:107] acquiring lock: {Name:mkfd750a97f35affdf739b05bc2c42b6047fbf5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.811020  425710 cache.go:107] acquiring lock: {Name:mk116219e125c27774099e2c03185bb0f7c37793 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.811098  425710 cache.go:107] acquiring lock: {Name:mk4a9d9d2e716087dc2a0ea296e67e4efd1c29aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.811136  425710 cache.go:115] /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0127 13:27:27.811159  425710 cache.go:115] /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0127 13:27:27.811174  425710 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 80.142µs
	I0127 13:27:27.811140  425710 cache.go:107] acquiring lock: {Name:mkdd6d061b3283829e7da9e55d5d96653c6f09fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.811192  425710 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0127 13:27:27.811156  425710 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 143.885µs
	I0127 13:27:27.811200  425710 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0127 13:27:27.811137  425710 cache.go:115] /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0127 13:27:27.811197  425710 start.go:360] acquireMachinesLock for no-preload-563155: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:27:27.811209  425710 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 183.829µs
	I0127 13:27:27.811215  425710 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0127 13:27:27.811163  425710 cache.go:115] /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 13:27:27.811207  425710 cache.go:107] acquiring lock: {Name:mke1c8b564942e430b0ead6d8281f0f620f6db6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.811267  425710 start.go:364] duration metric: took 48.615µs to acquireMachinesLock for "no-preload-563155"
	I0127 13:27:27.811222  425710 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 213.311µs
	I0127 13:27:27.811315  425710 cache.go:115] /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0127 13:27:27.811327  425710 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 13:27:27.811337  425710 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 173.488µs
	I0127 13:27:27.811345  425710 cache.go:115] /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0127 13:27:27.811351  425710 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0127 13:27:27.811316  425710 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:27:27.811357  425710 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 249.748µs
	I0127 13:27:27.811386  425710 fix.go:54] fixHost starting: 
	I0127 13:27:27.811387  425710 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0127 13:27:27.811486  425710 cache.go:107] acquiring lock: {Name:mk58ede304638638cba0278f7d0342d70a3d8abc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.811505  425710 cache.go:107] acquiring lock: {Name:mk479844eae09b7d5597bf669235b93179bda2dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:27:27.811627  425710 cache.go:115] /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0127 13:27:27.811637  425710 cache.go:115] /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0127 13:27:27.811649  425710 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 179.337µs
	I0127 13:27:27.811647  425710 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 209.186µs
	I0127 13:27:27.811663  425710 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0127 13:27:27.811666  425710 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0127 13:27:27.811673  425710 cache.go:87] Successfully saved all images to host disk.
	I0127 13:27:27.811811  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:27:27.811844  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:27:27.825786  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0127 13:27:27.826231  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:27:27.826727  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:27:27.826745  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:27:27.827098  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:27:27.827254  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:27.827386  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetState
	I0127 13:27:27.828786  425710 fix.go:112] recreateIfNeeded on no-preload-563155: state=Stopped err=<nil>
	I0127 13:27:27.828801  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	W0127 13:27:27.828958  425710 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:27:27.830507  425710 out.go:177] * Restarting existing kvm2 VM for "no-preload-563155" ...
	I0127 13:27:27.831594  425710 main.go:141] libmachine: (no-preload-563155) Calling .Start
	I0127 13:27:27.831740  425710 main.go:141] libmachine: (no-preload-563155) starting domain...
	I0127 13:27:27.831756  425710 main.go:141] libmachine: (no-preload-563155) ensuring networks are active...
	I0127 13:27:27.832581  425710 main.go:141] libmachine: (no-preload-563155) Ensuring network default is active
	I0127 13:27:27.832853  425710 main.go:141] libmachine: (no-preload-563155) Ensuring network mk-no-preload-563155 is active
	I0127 13:27:27.833181  425710 main.go:141] libmachine: (no-preload-563155) getting domain XML...
	I0127 13:27:27.833756  425710 main.go:141] libmachine: (no-preload-563155) creating domain...
	I0127 13:27:29.026052  425710 main.go:141] libmachine: (no-preload-563155) waiting for IP...
	I0127 13:27:29.027212  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:29.027662  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:29.027743  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:29.027664  425745 retry.go:31] will retry after 246.172666ms: waiting for domain to come up
	I0127 13:27:29.275135  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:29.275736  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:29.275772  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:29.275690  425745 retry.go:31] will retry after 384.608395ms: waiting for domain to come up
	I0127 13:27:29.662192  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:29.662774  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:29.662801  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:29.662734  425745 retry.go:31] will retry after 383.506264ms: waiting for domain to come up
	I0127 13:27:30.048265  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:30.048807  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:30.048850  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:30.048771  425745 retry.go:31] will retry after 576.192328ms: waiting for domain to come up
	I0127 13:27:30.626650  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:30.627242  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:30.627281  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:30.627174  425745 retry.go:31] will retry after 498.818265ms: waiting for domain to come up
	I0127 13:27:31.127829  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:31.128419  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:31.128444  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:31.128383  425745 retry.go:31] will retry after 862.80305ms: waiting for domain to come up
	I0127 13:27:31.993291  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:31.993871  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:31.993901  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:31.993833  425745 retry.go:31] will retry after 886.721212ms: waiting for domain to come up
	I0127 13:27:32.882348  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:32.882909  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:32.882965  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:32.882903  425745 retry.go:31] will retry after 1.317298494s: waiting for domain to come up
	I0127 13:27:34.202298  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:34.202867  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:34.202898  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:34.202809  425745 retry.go:31] will retry after 1.834901199s: waiting for domain to come up
	I0127 13:27:36.039744  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:36.040251  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:36.040269  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:36.040222  425745 retry.go:31] will retry after 1.631785425s: waiting for domain to come up
	I0127 13:27:37.673239  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:37.673790  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:37.673813  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:37.673781  425745 retry.go:31] will retry after 2.441914355s: waiting for domain to come up
	I0127 13:27:40.116906  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:40.117453  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:40.117476  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:40.117428  425745 retry.go:31] will retry after 2.683589136s: waiting for domain to come up
	I0127 13:27:42.803501  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:42.804038  425710 main.go:141] libmachine: (no-preload-563155) DBG | unable to find current IP address of domain no-preload-563155 in network mk-no-preload-563155
	I0127 13:27:42.804071  425710 main.go:141] libmachine: (no-preload-563155) DBG | I0127 13:27:42.803997  425745 retry.go:31] will retry after 3.457685923s: waiting for domain to come up
	I0127 13:27:46.265840  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.266297  425710 main.go:141] libmachine: (no-preload-563155) found domain IP: 192.168.72.130
	I0127 13:27:46.266340  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has current primary IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
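
The run of "will retry after …: waiting for domain to come up" lines above is minikube polling the libvirt DHCP leases with a growing backoff until the VM's IP shows up. A minimal, self-contained Go sketch of that pattern (the lookup function and the backoff growth are illustrative placeholders, not the real retry.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func main() {
		wait := 500 * time.Millisecond
		for attempt := 0; attempt < 10; attempt++ {
			ip, err := lookupDomainIP() // stand-in for the libvirt DHCP-lease lookup
			if err == nil {
				fmt.Println("found domain IP:", ip)
				return
			}
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			wait += wait / 2 // grow the delay between attempts
		}
	}

	// lookupDomainIP is a placeholder so the sketch compiles; the real code asks
	// libvirt for the host DHCP lease matching the domain's MAC address.
	func lookupDomainIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}
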
	I0127 13:27:46.266349  425710 main.go:141] libmachine: (no-preload-563155) reserving static IP address...
	I0127 13:27:46.266787  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "no-preload-563155", mac: "52:54:00:85:39:5a", ip: "192.168.72.130"} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.266832  425710 main.go:141] libmachine: (no-preload-563155) DBG | skip adding static IP to network mk-no-preload-563155 - found existing host DHCP lease matching {name: "no-preload-563155", mac: "52:54:00:85:39:5a", ip: "192.168.72.130"}
	I0127 13:27:46.266847  425710 main.go:141] libmachine: (no-preload-563155) reserved static IP address 192.168.72.130 for domain no-preload-563155
	I0127 13:27:46.266864  425710 main.go:141] libmachine: (no-preload-563155) waiting for SSH...
	I0127 13:27:46.266879  425710 main.go:141] libmachine: (no-preload-563155) DBG | Getting to WaitForSSH function...
	I0127 13:27:46.268916  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.269196  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.269227  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.269317  425710 main.go:141] libmachine: (no-preload-563155) DBG | Using SSH client type: external
	I0127 13:27:46.269349  425710 main.go:141] libmachine: (no-preload-563155) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa (-rw-------)
	I0127 13:27:46.269394  425710 main.go:141] libmachine: (no-preload-563155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:27:46.269407  425710 main.go:141] libmachine: (no-preload-563155) DBG | About to run SSH command:
	I0127 13:27:46.269419  425710 main.go:141] libmachine: (no-preload-563155) DBG | exit 0
	I0127 13:27:46.398430  425710 main.go:141] libmachine: (no-preload-563155) DBG | SSH cmd err, output: <nil>: 
	I0127 13:27:46.398824  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetConfigRaw
	I0127 13:27:46.399558  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetIP
	I0127 13:27:46.401872  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.402255  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.402294  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.402483  425710 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/config.json ...
	I0127 13:27:46.402704  425710 machine.go:93] provisionDockerMachine start ...
	I0127 13:27:46.402725  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:46.403055  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:46.405469  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.405857  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.405887  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.406068  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:46.406259  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:46.406448  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:46.406619  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:46.406772  425710 main.go:141] libmachine: Using SSH client type: native
	I0127 13:27:46.406958  425710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0127 13:27:46.406969  425710 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:27:46.518759  425710 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:27:46.518787  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetMachineName
	I0127 13:27:46.519047  425710 buildroot.go:166] provisioning hostname "no-preload-563155"
	I0127 13:27:46.519070  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetMachineName
	I0127 13:27:46.519257  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:46.521340  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.521659  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.521688  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.521829  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:46.521998  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:46.522172  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:46.522321  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:46.522480  425710 main.go:141] libmachine: Using SSH client type: native
	I0127 13:27:46.522683  425710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0127 13:27:46.522697  425710 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-563155 && echo "no-preload-563155" | sudo tee /etc/hostname
	I0127 13:27:46.649504  425710 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-563155
	
	I0127 13:27:46.649542  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:46.652060  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.652399  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.652424  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.652651  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:46.652824  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:46.652988  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:46.653133  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:46.653312  425710 main.go:141] libmachine: Using SSH client type: native
	I0127 13:27:46.653480  425710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0127 13:27:46.653495  425710 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-563155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-563155/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-563155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:27:46.770584  425710 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:27:46.770620  425710 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:27:46.770650  425710 buildroot.go:174] setting up certificates
	I0127 13:27:46.770662  425710 provision.go:84] configureAuth start
	I0127 13:27:46.770674  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetMachineName
	I0127 13:27:46.770932  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetIP
	I0127 13:27:46.773615  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.773967  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.773998  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.774144  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:46.776053  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.776420  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.776461  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.776574  425710 provision.go:143] copyHostCerts
	I0127 13:27:46.776632  425710 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:27:46.776643  425710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:27:46.776708  425710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:27:46.776822  425710 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:27:46.776833  425710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:27:46.776858  425710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:27:46.776923  425710 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:27:46.776930  425710 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:27:46.776950  425710 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:27:46.777011  425710 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.no-preload-563155 san=[127.0.0.1 192.168.72.130 localhost minikube no-preload-563155]
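
Here provision.go issues a CA-signed server certificate whose SANs cover 127.0.0.1, the VM IP and the host names listed in the line above. A self-contained sketch of that kind of issuance with Go's crypto/x509 (throwaway CA, SANs copied from the log, error handling elided; this is not minikube's actual implementation):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA key pair and self-signed CA certificate.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs seen in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-563155"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-563155"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.130")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
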
	I0127 13:27:46.905910  425710 provision.go:177] copyRemoteCerts
	I0127 13:27:46.905972  425710 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:27:46.906014  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:46.908660  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.908931  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:46.908965  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:46.909124  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:46.909321  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:46.909468  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:46.909579  425710 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa Username:docker}
	I0127 13:27:46.996327  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:27:47.020563  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:27:47.044098  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:27:47.067299  425710 provision.go:87] duration metric: took 296.623093ms to configureAuth
	I0127 13:27:47.067330  425710 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:27:47.067516  425710 config.go:182] Loaded profile config "no-preload-563155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:27:47.067611  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:47.070398  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.070817  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:47.070850  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.071010  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:47.071213  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:47.071379  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:47.071519  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:47.071680  425710 main.go:141] libmachine: Using SSH client type: native
	I0127 13:27:47.071855  425710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0127 13:27:47.071873  425710 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:27:47.299912  425710 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:27:47.299954  425710 machine.go:96] duration metric: took 897.232801ms to provisionDockerMachine
	I0127 13:27:47.299972  425710 start.go:293] postStartSetup for "no-preload-563155" (driver="kvm2")
	I0127 13:27:47.299985  425710 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:27:47.300017  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:47.300385  425710 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:27:47.300428  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:47.303151  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.303478  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:47.303505  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.303690  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:47.303856  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:47.304017  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:47.304162  425710 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa Username:docker}
	I0127 13:27:47.388582  425710 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:27:47.392931  425710 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:27:47.392959  425710 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:27:47.393037  425710 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:27:47.393156  425710 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:27:47.393255  425710 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:27:47.402807  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:27:47.426729  425710 start.go:296] duration metric: took 126.743361ms for postStartSetup
	I0127 13:27:47.426764  425710 fix.go:56] duration metric: took 19.615378589s for fixHost
	I0127 13:27:47.426790  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:47.429254  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.429626  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:47.429655  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.429801  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:47.429997  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:47.430143  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:47.430293  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:47.430426  425710 main.go:141] libmachine: Using SSH client type: native
	I0127 13:27:47.430630  425710 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0127 13:27:47.430644  425710 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:27:47.538879  425710 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984467.498568018
	
	I0127 13:27:47.538907  425710 fix.go:216] guest clock: 1737984467.498568018
	I0127 13:27:47.538916  425710 fix.go:229] Guest: 2025-01-27 13:27:47.498568018 +0000 UTC Remote: 2025-01-27 13:27:47.426769053 +0000 UTC m=+19.755675326 (delta=71.798965ms)
	I0127 13:27:47.538947  425710 fix.go:200] guest clock delta is within tolerance: 71.798965ms
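
The guest-clock check compares the `date +%s.%N` output from inside the VM with the host's wall clock and accepts the machine when the delta is within tolerance. A tiny Go illustration using the two timestamps logged above (the 2s tolerance is an assumed value for the sketch, not necessarily what fix.go uses):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1737984467, 498568018)                       // `date +%s.%N` inside the VM
		host := time.Date(2025, 1, 27, 13, 27, 47, 426769053, time.UTC) // host-side timestamp
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance) // delta=71.798965ms, true
	}
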
	I0127 13:27:47.538953  425710 start.go:83] releasing machines lock for "no-preload-563155", held for 19.72766655s
	I0127 13:27:47.538971  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:47.539206  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetIP
	I0127 13:27:47.541671  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.542044  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:47.542068  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.542241  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:47.542764  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:47.542930  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:27:47.543015  425710 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:27:47.543086  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:47.543185  425710 ssh_runner.go:195] Run: cat /version.json
	I0127 13:27:47.543214  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:27:47.545668  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.545987  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:47.546019  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.546055  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.546096  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:47.546285  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:47.546430  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:47.546498  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:47.546520  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:47.546531  425710 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa Username:docker}
	I0127 13:27:47.546741  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:27:47.546929  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:27:47.547107  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:27:47.547261  425710 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa Username:docker}
	I0127 13:27:47.649560  425710 ssh_runner.go:195] Run: systemctl --version
	I0127 13:27:47.655419  425710 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:27:47.803666  425710 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:27:47.810104  425710 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:27:47.810174  425710 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:27:47.826564  425710 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:27:47.826589  425710 start.go:495] detecting cgroup driver to use...
	I0127 13:27:47.826654  425710 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:27:47.845295  425710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:27:47.860891  425710 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:27:47.860945  425710 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:27:47.876137  425710 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:27:47.891527  425710 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:27:48.006444  425710 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:27:48.169751  425710 docker.go:233] disabling docker service ...
	I0127 13:27:48.169834  425710 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:27:48.183709  425710 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:27:48.195865  425710 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:27:48.312710  425710 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:27:48.419187  425710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:27:48.433653  425710 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:27:48.451535  425710 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:27:48.451589  425710 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:27:48.461284  425710 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:27:48.461332  425710 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:27:48.470925  425710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:27:48.480619  425710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:27:48.490085  425710 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:27:48.500084  425710 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:27:48.509865  425710 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:27:48.526397  425710 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
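
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands, not copied from the VM):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
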
	I0127 13:27:48.535994  425710 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:27:48.544996  425710 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:27:48.545034  425710 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:27:48.557235  425710 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
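
The commands above are the netfilter fallback: the bridge sysctl probe fails because /proc/sys/net/bridge does not exist until br_netfilter is loaded, so the module is loaded and IPv4 forwarding is switched on directly. An illustrative Go replay of that logic with os/exec (not minikube's code; must run as root):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge netfilter sysctl is missing, loading br_netfilter creates it.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			exec.Command("modprobe", "br_netfilter").Run()
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}
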
	I0127 13:27:48.566371  425710 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:27:48.677713  425710 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:27:48.769090  425710 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:27:48.769183  425710 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:27:48.773970  425710 start.go:563] Will wait 60s for crictl version
	I0127 13:27:48.774027  425710 ssh_runner.go:195] Run: which crictl
	I0127 13:27:48.777849  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:27:48.820271  425710 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:27:48.820355  425710 ssh_runner.go:195] Run: crio --version
	I0127 13:27:48.852172  425710 ssh_runner.go:195] Run: crio --version
	I0127 13:27:48.885318  425710 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:27:48.886584  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetIP
	I0127 13:27:48.888965  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:48.889287  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:27:48.889331  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:27:48.889498  425710 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0127 13:27:48.893658  425710 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:27:48.912522  425710 kubeadm.go:883] updating cluster {Name:no-preload-563155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-563155 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:27:48.912695  425710 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:27:48.912754  425710 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:27:48.952051  425710 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 13:27:48.952081  425710 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.1 registry.k8s.io/kube-controller-manager:v1.32.1 registry.k8s.io/kube-scheduler:v1.32.1 registry.k8s.io/kube-proxy:v1.32.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 13:27:48.952204  425710 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0127 13:27:48.952169  425710 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 13:27:48.952276  425710 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 13:27:48.952296  425710 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 13:27:48.952332  425710 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0127 13:27:48.952175  425710 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:27:48.952352  425710 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 13:27:48.952186  425710 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 13:27:48.953979  425710 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 13:27:48.953979  425710 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 13:27:48.953993  425710 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0127 13:27:48.954002  425710 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:27:48.953980  425710 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 13:27:48.953981  425710 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 13:27:48.953985  425710 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 13:27:48.953989  425710 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0127 13:27:49.096516  425710 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.1
	I0127 13:27:49.097973  425710 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0127 13:27:49.142391  425710 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 13:27:49.151173  425710 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.1" does not exist at hash "2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1" in container runtime
	I0127 13:27:49.151232  425710 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.1
	I0127 13:27:49.151276  425710 ssh_runner.go:195] Run: which crictl
	I0127 13:27:49.151171  425710 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0127 13:27:49.151303  425710 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0127 13:27:49.151335  425710 ssh_runner.go:195] Run: which crictl
	I0127 13:27:49.164920  425710 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.1
	I0127 13:27:49.192483  425710 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.1" does not exist at hash "019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35" in container runtime
	I0127 13:27:49.192528  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 13:27:49.192537  425710 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 13:27:49.192568  425710 ssh_runner.go:195] Run: which crictl
	I0127 13:27:49.192610  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 13:27:49.206085  425710 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.1" needs transfer: "registry.k8s.io/kube-proxy:v1.32.1" does not exist at hash "e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a" in container runtime
	I0127 13:27:49.206125  425710 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.1
	I0127 13:27:49.206159  425710 ssh_runner.go:195] Run: which crictl
	I0127 13:27:49.256605  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 13:27:49.256637  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 13:27:49.256667  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 13:27:49.256682  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 13:27:49.265606  425710 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.1
	I0127 13:27:49.266491  425710 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0127 13:27:49.274639  425710 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0127 13:27:49.322405  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 13:27:49.423809  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0127 13:27:49.423809  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.1
	I0127 13:27:49.423881  425710 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.1" does not exist at hash "95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a" in container runtime
	I0127 13:27:49.423888  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 13:27:49.423914  425710 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.1
	I0127 13:27:49.423942  425710 ssh_runner.go:195] Run: which crictl
	I0127 13:27:49.557250  425710 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0127 13:27:49.557291  425710 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0127 13:27:49.557330  425710 ssh_runner.go:195] Run: which crictl
	I0127 13:27:49.557357  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.1
	I0127 13:27:49.557429  425710 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0127 13:27:49.557451  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 13:27:49.557463  425710 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0127 13:27:49.557518  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 13:27:49.557551  425710 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 13:27:49.557519  425710 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0127 13:27:49.570609  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 13:27:49.630230  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 13:27:49.630288  425710 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0127 13:27:49.630412  425710 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 13:27:49.636497  425710 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0127 13:27:49.636547  425710 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0127 13:27:49.636564  425710 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0127 13:27:49.636575  425710 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.1 (exists)
	I0127 13:27:49.636598  425710 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 13:27:49.636613  425710 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0127 13:27:49.663661  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 13:27:49.686989  425710 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.1 (exists)
	I0127 13:27:49.687022  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1
	I0127 13:27:49.687036  425710 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.1 (exists)
	I0127 13:27:51.209161  425710 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:27:51.649242  425710 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.012606982s)
	I0127 13:27:51.649272  425710 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0127 13:27:51.649317  425710 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 13:27:51.649379  425710 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.1: (1.962333506s)
	I0127 13:27:51.649402  425710 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1
	I0127 13:27:51.649326  425710 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0: (1.985631333s)
	I0127 13:27:51.649434  425710 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0127 13:27:51.649469  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0127 13:27:51.649475  425710 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0127 13:27:51.649515  425710 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:27:51.649527  425710 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 13:27:51.649560  425710 ssh_runner.go:195] Run: which crictl
	I0127 13:27:51.712196  425710 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0127 13:27:51.712328  425710 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0127 13:27:53.632151  425710 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.1: (1.982722284s)
	I0127 13:27:53.632192  425710 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 from cache
	I0127 13:27:53.632219  425710 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 13:27:53.632221  425710 ssh_runner.go:235] Completed: which crictl: (1.982641844s)
	I0127 13:27:53.632270  425710 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1
	I0127 13:27:53.632187  425710 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.1: (1.982635985s)
	I0127 13:27:53.632284  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:27:53.632307  425710 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.1 (exists)
	I0127 13:27:53.632341  425710 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0: (1.919992403s)
	I0127 13:27:53.632381  425710 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I0127 13:27:55.813110  425710 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.1: (2.180808589s)
	I0127 13:27:55.813157  425710 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 from cache
	I0127 13:27:55.813174  425710 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.180865832s)
	I0127 13:27:55.813252  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:27:55.813183  425710 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 13:27:55.813332  425710 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1
	I0127 13:27:55.858173  425710 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:27:57.687276  425710 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.1: (1.873910401s)
	I0127 13:27:57.687308  425710 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 from cache
	I0127 13:27:57.687310  425710 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.829092116s)
	I0127 13:27:57.687348  425710 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 13:27:57.687368  425710 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0127 13:27:57.687405  425710 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1
	I0127 13:27:57.687475  425710 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0127 13:27:59.656516  425710 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.9690103s)
	I0127 13:27:59.656559  425710 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0127 13:27:59.656697  425710 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.1: (1.969264101s)
	I0127 13:27:59.656727  425710 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 from cache
	I0127 13:27:59.656759  425710 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0127 13:27:59.656816  425710 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0127 13:28:03.162419  425710 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.50556418s)
	I0127 13:28:03.162467  425710 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0127 13:28:03.162499  425710 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0127 13:28:03.162594  425710 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0127 13:28:04.017350  425710 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0127 13:28:04.017409  425710 cache_images.go:123] Successfully loaded all cached images
	I0127 13:28:04.017417  425710 cache_images.go:92] duration metric: took 15.065321178s to LoadCachedImages
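
The stretch from about 13:27:48.9 to 13:28:04 is the cached-image fallback: with no preload tarball for this Kubernetes version ("no-preload"), each required image is inspected in the VM's podman store, any stale tag is removed with crictl rmi, and the image is loaded from the tarball cached under /var/lib/minikube/images. A compressed, illustrative replay of that per-image pattern (run locally with os/exec instead of over minikube's ssh_runner, and listing only two of the images):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		images := map[string]string{
			"registry.k8s.io/kube-scheduler:v1.32.1":  "/var/lib/minikube/images/kube-scheduler_v1.32.1",
			"registry.k8s.io/coredns/coredns:v1.11.3": "/var/lib/minikube/images/coredns_v1.11.3",
		}
		for ref, tarball := range images {
			// Already present in the podman/CRI-O store at the expected ID? Then skip it.
			if exec.Command("sudo", "podman", "image", "inspect", ref).Run() == nil {
				continue
			}
			// Drop any stale tag, then stream the cached tarball in.
			exec.Command("sudo", "crictl", "rmi", ref).Run()
			if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
				fmt.Println("load failed:", err)
			}
		}
	}
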
	I0127 13:28:04.017440  425710 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.32.1 crio true true} ...
	I0127 13:28:04.017568  425710 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-563155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:no-preload-563155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:28:04.017635  425710 ssh_runner.go:195] Run: crio config
	I0127 13:28:04.075810  425710 cni.go:84] Creating CNI manager for ""
	I0127 13:28:04.075841  425710 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:28:04.075854  425710 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:28:04.075877  425710 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-563155 NodeName:no-preload-563155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:28:04.076007  425710 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-563155"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.130"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:28:04.076082  425710 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:28:04.086089  425710 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:28:04.086149  425710 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:28:04.095636  425710 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 13:28:04.111816  425710 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:28:04.127679  425710 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0127 13:28:04.144256  425710 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0127 13:28:04.148180  425710 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:28:04.159919  425710 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:28:04.290790  425710 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:28:04.308131  425710 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155 for IP: 192.168.72.130
	I0127 13:28:04.308160  425710 certs.go:194] generating shared ca certs ...
	I0127 13:28:04.308183  425710 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:28:04.308414  425710 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:28:04.308482  425710 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:28:04.308498  425710 certs.go:256] generating profile certs ...
	I0127 13:28:04.308620  425710 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/client.key
	I0127 13:28:04.308702  425710 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/apiserver.key.554377bd
	I0127 13:28:04.308752  425710 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/proxy-client.key
	I0127 13:28:04.308907  425710 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:28:04.308950  425710 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:28:04.308964  425710 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:28:04.308997  425710 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:28:04.309028  425710 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:28:04.309054  425710 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:28:04.309094  425710 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:28:04.309944  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:28:04.347920  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:28:04.381383  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:28:04.409556  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:28:04.455360  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:28:04.487074  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:28:04.521459  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:28:04.545842  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/no-preload-563155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 13:28:04.569277  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:28:04.595995  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:28:04.623049  425710 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:28:04.646576  425710 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:28:04.662659  425710 ssh_runner.go:195] Run: openssl version
	I0127 13:28:04.668306  425710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:28:04.678385  425710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:28:04.682878  425710 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:28:04.682920  425710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:28:04.688484  425710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:28:04.698520  425710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:28:04.708533  425710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:28:04.712986  425710 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:28:04.713031  425710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:28:04.718469  425710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:28:04.728481  425710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:28:04.739487  425710 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:28:04.743851  425710 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:28:04.743885  425710 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:28:04.749527  425710 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:28:04.759622  425710 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:28:04.764304  425710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:28:04.769990  425710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:28:04.775777  425710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:28:04.781606  425710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:28:04.787253  425710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:28:04.792968  425710 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 13:28:04.798522  425710 kubeadm.go:392] StartCluster: {Name:no-preload-563155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-563155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:04.798643  425710 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:28:04.798690  425710 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:28:04.836611  425710 cri.go:89] found id: ""
	I0127 13:28:04.836689  425710 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:28:04.846308  425710 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:28:04.846327  425710 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:28:04.846371  425710 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:28:04.855590  425710 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:28:04.856752  425710 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-563155" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:28:04.857391  425710 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-563155" cluster setting kubeconfig missing "no-preload-563155" context setting]
	I0127 13:28:04.858302  425710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:28:04.860246  425710 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:28:04.869228  425710 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I0127 13:28:04.869253  425710 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:28:04.869264  425710 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:28:04.869312  425710 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:28:04.907994  425710 cri.go:89] found id: ""
	I0127 13:28:04.908069  425710 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:28:04.924907  425710 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:28:04.934435  425710 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:28:04.934455  425710 kubeadm.go:157] found existing configuration files:
	
	I0127 13:28:04.934501  425710 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:28:04.943496  425710 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:28:04.943558  425710 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:28:04.952564  425710 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:28:04.961590  425710 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:28:04.961649  425710 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:28:04.970601  425710 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:28:04.979366  425710 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:28:04.979425  425710 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:28:04.988329  425710 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:28:04.996909  425710 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:28:04.996961  425710 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:28:05.005930  425710 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:28:05.014977  425710 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:05.119775  425710 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:05.887459  425710 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:06.082997  425710 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:06.156459  425710 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:06.281732  425710 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:28:06.281845  425710 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:28:06.782411  425710 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:28:07.282653  425710 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:28:07.296368  425710 api_server.go:72] duration metric: took 1.014637658s to wait for apiserver process to appear ...
	I0127 13:28:07.296403  425710 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:28:07.296439  425710 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0127 13:28:10.119824  425710 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:28:10.119847  425710 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:28:10.119861  425710 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0127 13:28:10.184598  425710 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:28:10.184630  425710 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:28:10.296945  425710 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0127 13:28:10.307189  425710 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:28:10.307224  425710 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:28:10.796597  425710 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0127 13:28:10.802223  425710 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:28:10.802255  425710 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:28:11.296793  425710 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0127 13:28:11.302047  425710 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:28:11.302079  425710 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:28:11.796623  425710 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0127 13:28:11.803021  425710 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0127 13:28:11.810137  425710 api_server.go:141] control plane version: v1.32.1
	I0127 13:28:11.810164  425710 api_server.go:131] duration metric: took 4.513753208s to wait for apiserver health ...
	I0127 13:28:11.810174  425710 cni.go:84] Creating CNI manager for ""
	I0127 13:28:11.810180  425710 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:28:11.811964  425710 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:28:11.813191  425710 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:28:11.824677  425710 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:28:11.845467  425710 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:28:11.857751  425710 system_pods.go:59] 8 kube-system pods found
	I0127 13:28:11.857794  425710 system_pods.go:61] "coredns-668d6bf9bc-s6vq4" [e88b8ca4-8294-47dd-bf03-f7f2647391b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:28:11.857806  425710 system_pods.go:61] "etcd-no-preload-563155" [7c3a5871-e1a4-40a6-a325-bc54f48ab037] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:28:11.857818  425710 system_pods.go:61] "kube-apiserver-no-preload-563155" [11d5dbd4-8f1b-4c01-8cea-3481bdcdc063] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:28:11.857836  425710 system_pods.go:61] "kube-controller-manager-no-preload-563155" [e352c752-f893-432c-bae0-3e9f650428fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:28:11.857843  425710 system_pods.go:61] "kube-proxy-bchfc" [1ff0e6a0-7eb3-4eac-87d3-62cb6a6ed53a] Running
	I0127 13:28:11.857852  425710 system_pods.go:61] "kube-scheduler-no-preload-563155" [a4531584-30b1-42e3-9740-52dc7ca0e0ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:28:11.857859  425710 system_pods.go:61] "metrics-server-f79f97bbb-n95n7" [7ada02b7-431f-42c6-a06e-01cc5fed3980] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:28:11.857870  425710 system_pods.go:61] "storage-provisioner" [39e1a724-72ba-47e8-ab63-404a35c66e9f] Running
	I0127 13:28:11.857878  425710 system_pods.go:74] duration metric: took 12.395482ms to wait for pod list to return data ...
	I0127 13:28:11.857887  425710 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:28:11.861899  425710 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:28:11.861921  425710 node_conditions.go:123] node cpu capacity is 2
	I0127 13:28:11.861944  425710 node_conditions.go:105] duration metric: took 4.052823ms to run NodePressure ...
	I0127 13:28:11.861961  425710 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:12.128335  425710 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 13:28:12.134771  425710 kubeadm.go:739] kubelet initialised
	I0127 13:28:12.134799  425710 kubeadm.go:740] duration metric: took 6.434232ms waiting for restarted kubelet to initialise ...
	I0127 13:28:12.134813  425710 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:28:12.145991  425710 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-s6vq4" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:12.152538  425710 pod_ready.go:98] node "no-preload-563155" hosting pod "coredns-668d6bf9bc-s6vq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-563155" has status "Ready":"False"
	I0127 13:28:12.152561  425710 pod_ready.go:82] duration metric: took 6.537526ms for pod "coredns-668d6bf9bc-s6vq4" in "kube-system" namespace to be "Ready" ...
	E0127 13:28:12.152574  425710 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-563155" hosting pod "coredns-668d6bf9bc-s6vq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-563155" has status "Ready":"False"
	I0127 13:28:12.152582  425710 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:12.158993  425710 pod_ready.go:98] node "no-preload-563155" hosting pod "etcd-no-preload-563155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-563155" has status "Ready":"False"
	I0127 13:28:12.159020  425710 pod_ready.go:82] duration metric: took 6.426038ms for pod "etcd-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	E0127 13:28:12.159032  425710 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-563155" hosting pod "etcd-no-preload-563155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-563155" has status "Ready":"False"
	I0127 13:28:12.159046  425710 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:12.164954  425710 pod_ready.go:98] node "no-preload-563155" hosting pod "kube-apiserver-no-preload-563155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-563155" has status "Ready":"False"
	I0127 13:28:12.164984  425710 pod_ready.go:82] duration metric: took 5.928007ms for pod "kube-apiserver-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	E0127 13:28:12.164997  425710 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-563155" hosting pod "kube-apiserver-no-preload-563155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-563155" has status "Ready":"False"
	I0127 13:28:12.165015  425710 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:12.248944  425710 pod_ready.go:98] node "no-preload-563155" hosting pod "kube-controller-manager-no-preload-563155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-563155" has status "Ready":"False"
	I0127 13:28:12.248988  425710 pod_ready.go:82] duration metric: took 83.95598ms for pod "kube-controller-manager-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	E0127 13:28:12.249005  425710 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-563155" hosting pod "kube-controller-manager-no-preload-563155" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-563155" has status "Ready":"False"
	I0127 13:28:12.249014  425710 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bchfc" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:12.648667  425710 pod_ready.go:93] pod "kube-proxy-bchfc" in "kube-system" namespace has status "Ready":"True"
	I0127 13:28:12.648698  425710 pod_ready.go:82] duration metric: took 399.672644ms for pod "kube-proxy-bchfc" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:12.648713  425710 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:14.656443  425710 pod_ready.go:103] pod "kube-scheduler-no-preload-563155" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:17.156563  425710 pod_ready.go:103] pod "kube-scheduler-no-preload-563155" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:19.156778  425710 pod_ready.go:103] pod "kube-scheduler-no-preload-563155" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:20.656748  425710 pod_ready.go:93] pod "kube-scheduler-no-preload-563155" in "kube-system" namespace has status "Ready":"True"
	I0127 13:28:20.656776  425710 pod_ready.go:82] duration metric: took 8.008053165s for pod "kube-scheduler-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:20.656789  425710 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace to be "Ready" ...
	I0127 13:28:22.663086  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:24.663829  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:27.163011  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:29.164215  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:31.165185  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:33.664236  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:35.664735  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:37.665649  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:40.165337  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:42.664658  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:44.664775  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:47.162454  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:49.163678  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:51.164932  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:53.666220  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:56.163891  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:28:58.166131  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:00.664707  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:03.164111  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:05.165295  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:07.458489  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:09.663359  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:12.163524  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:14.662727  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:16.663212  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:19.164056  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:21.663227  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:23.663616  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:26.163591  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:28.163874  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:30.663389  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:32.664093  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:35.165017  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:37.662568  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:39.664321  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:42.163888  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:44.664443  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:47.162853  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:49.163812  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:51.662937  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:53.663096  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:55.663608  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:58.163814  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:00.663518  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:03.163023  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:05.164753  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:07.662816  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:09.663585  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:11.664268  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:14.163117  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:16.163779  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:18.663930  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:20.664205  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:23.166447  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:25.663253  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:28.164615  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:30.164777  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:32.665269  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:35.164535  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:37.663204  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:40.165146  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:42.664763  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:44.664802  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:46.666649  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:49.163545  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:51.164195  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:53.164571  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:55.664063  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:57.664905  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:00.163418  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:02.664228  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:05.164293  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:07.663270  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:10.163846  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:12.662069  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:14.663869  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:17.164001  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:19.663680  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:21.663946  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:24.163580  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:26.165789  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:28.663560  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:31.164903  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:33.662795  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:35.662897  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:37.663971  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:40.163594  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:42.163799  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:44.663795  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:47.165111  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:49.663964  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:52.164962  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:54.663186  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:56.663225  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:58.664357  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:00.664508  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:02.665025  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:04.665143  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:07.162919  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:09.165028  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:11.664295  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:14.163674  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:16.666312  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:19.163863  425710 pod_ready.go:103] pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:20.656979  425710 pod_ready.go:82] duration metric: took 4m0.000167923s for pod "metrics-server-f79f97bbb-n95n7" in "kube-system" namespace to be "Ready" ...
	E0127 13:32:20.657044  425710 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 13:32:20.657070  425710 pod_ready.go:39] duration metric: took 4m8.522243752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:32:20.657110  425710 kubeadm.go:597] duration metric: took 4m15.810773598s to restartPrimaryControlPlane
	W0127 13:32:20.657194  425710 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:32:20.657250  425710 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:32:48.524012  425710 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.866718927s)
	I0127 13:32:48.524101  425710 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:32:48.538802  425710 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:32:48.549291  425710 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:32:48.558904  425710 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:32:48.558921  425710 kubeadm.go:157] found existing configuration files:
	
	I0127 13:32:48.558959  425710 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:32:48.568145  425710 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:32:48.568193  425710 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:32:48.577604  425710 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:32:48.586709  425710 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:32:48.586750  425710 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:32:48.596041  425710 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:32:48.605156  425710 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:32:48.605212  425710 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:32:48.614530  425710 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:32:48.623673  425710 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:32:48.623726  425710 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:32:48.633443  425710 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:32:48.680652  425710 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:32:48.680802  425710 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:32:48.800156  425710 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:32:48.800325  425710 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:32:48.800471  425710 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:32:48.809305  425710 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:32:48.811215  425710 out.go:235]   - Generating certificates and keys ...
	I0127 13:32:48.811332  425710 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:32:48.811429  425710 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:32:48.811541  425710 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:32:48.811647  425710 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:32:48.811771  425710 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:32:48.811865  425710 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:32:48.811996  425710 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:32:48.812105  425710 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:32:48.812214  425710 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:32:48.812335  425710 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:32:48.812402  425710 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:32:48.812475  425710 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:32:48.975649  425710 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:32:49.081857  425710 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:32:49.227086  425710 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:32:49.474513  425710 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:32:49.576965  425710 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:32:49.577557  425710 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:32:49.580076  425710 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:32:49.581633  425710 out.go:235]   - Booting up control plane ...
	I0127 13:32:49.581755  425710 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:32:49.581837  425710 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:32:49.582015  425710 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:32:49.603623  425710 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:32:49.613119  425710 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:32:49.613201  425710 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:32:49.750697  425710 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:32:49.750831  425710 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:32:50.758729  425710 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.008048071s
	I0127 13:32:50.758856  425710 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:32:55.761123  425710 kubeadm.go:310] [api-check] The API server is healthy after 5.002244011s
	I0127 13:32:55.773435  425710 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:32:55.789516  425710 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:32:55.815468  425710 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:32:55.815702  425710 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-563155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:32:55.827349  425710 kubeadm.go:310] [bootstrap-token] Using token: hurq0p.q3cjc5f17b6eubzp
	I0127 13:32:55.828518  425710 out.go:235]   - Configuring RBAC rules ...
	I0127 13:32:55.828641  425710 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:32:55.835986  425710 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:32:55.841781  425710 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:32:55.844803  425710 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:32:55.847867  425710 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:32:55.851033  425710 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:32:56.167061  425710 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:32:56.601014  425710 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:32:57.168043  425710 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:32:57.169222  425710 kubeadm.go:310] 
	I0127 13:32:57.169356  425710 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:32:57.169375  425710 kubeadm.go:310] 
	I0127 13:32:57.169481  425710 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:32:57.169502  425710 kubeadm.go:310] 
	I0127 13:32:57.169541  425710 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:32:57.169630  425710 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:32:57.169699  425710 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:32:57.169712  425710 kubeadm.go:310] 
	I0127 13:32:57.169791  425710 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:32:57.169803  425710 kubeadm.go:310] 
	I0127 13:32:57.169863  425710 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:32:57.169874  425710 kubeadm.go:310] 
	I0127 13:32:57.169941  425710 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:32:57.170040  425710 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:32:57.170131  425710 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:32:57.170145  425710 kubeadm.go:310] 
	I0127 13:32:57.170247  425710 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:32:57.170350  425710 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:32:57.170361  425710 kubeadm.go:310] 
	I0127 13:32:57.170487  425710 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hurq0p.q3cjc5f17b6eubzp \
	I0127 13:32:57.170632  425710 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 \
	I0127 13:32:57.170704  425710 kubeadm.go:310] 	--control-plane 
	I0127 13:32:57.170732  425710 kubeadm.go:310] 
	I0127 13:32:57.170839  425710 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:32:57.170846  425710 kubeadm.go:310] 
	I0127 13:32:57.170937  425710 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hurq0p.q3cjc5f17b6eubzp \
	I0127 13:32:57.171058  425710 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 
	I0127 13:32:57.172306  425710 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:32:57.172337  425710 cni.go:84] Creating CNI manager for ""
	I0127 13:32:57.172349  425710 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:32:57.174119  425710 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:32:57.175460  425710 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:32:57.190511  425710 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:32:57.215146  425710 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:32:57.215210  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:32:57.215241  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-563155 minikube.k8s.io/updated_at=2025_01_27T13_32_57_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=no-preload-563155 minikube.k8s.io/primary=true
	I0127 13:32:57.257655  425710 ops.go:34] apiserver oom_adj: -16
	I0127 13:32:57.432319  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:32:57.932989  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:32:58.433199  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:32:58.932771  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:32:59.432972  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:32:59.933235  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:33:00.432326  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:33:00.933351  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:33:01.433161  425710 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:33:01.568035  425710 kubeadm.go:1113] duration metric: took 4.352894186s to wait for elevateKubeSystemPrivileges
	I0127 13:33:01.568068  425710 kubeadm.go:394] duration metric: took 4m56.769554829s to StartCluster
	I0127 13:33:01.568093  425710 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:33:01.568181  425710 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:33:01.569438  425710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:33:01.569694  425710 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:33:01.569742  425710 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:33:01.569857  425710 addons.go:69] Setting storage-provisioner=true in profile "no-preload-563155"
	I0127 13:33:01.569886  425710 addons.go:238] Setting addon storage-provisioner=true in "no-preload-563155"
	W0127 13:33:01.569896  425710 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:33:01.569925  425710 config.go:182] Loaded profile config "no-preload-563155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:33:01.569949  425710 addons.go:69] Setting dashboard=true in profile "no-preload-563155"
	I0127 13:33:01.569936  425710 host.go:66] Checking if "no-preload-563155" exists ...
	I0127 13:33:01.569981  425710 addons.go:69] Setting default-storageclass=true in profile "no-preload-563155"
	I0127 13:33:01.570000  425710 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-563155"
	I0127 13:33:01.570411  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.570446  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:01.570475  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.569969  425710 addons.go:238] Setting addon dashboard=true in "no-preload-563155"
	I0127 13:33:01.570517  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0127 13:33:01.570556  425710 addons.go:247] addon dashboard should already be in state true
	I0127 13:33:01.570610  425710 host.go:66] Checking if "no-preload-563155" exists ...
	I0127 13:33:01.570608  425710 addons.go:69] Setting metrics-server=true in profile "no-preload-563155"
	I0127 13:33:01.570632  425710 addons.go:238] Setting addon metrics-server=true in "no-preload-563155"
	W0127 13:33:01.570641  425710 addons.go:247] addon metrics-server should already be in state true
	I0127 13:33:01.570693  425710 host.go:66] Checking if "no-preload-563155" exists ...
	I0127 13:33:01.571023  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.571083  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:01.571197  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.571235  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:01.578597  425710 out.go:177] * Verifying Kubernetes components...
	I0127 13:33:01.580176  425710 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:33:01.592160  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0127 13:33:01.592632  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.593230  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.593247  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.593583  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.593638  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39935
	I0127 13:33:01.593759  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetState
	I0127 13:33:01.594203  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.595017  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.595033  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.595447  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.595531  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35471
	I0127 13:33:01.596250  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.596302  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:01.596938  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.597492  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.597513  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.597980  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.598383  425710 addons.go:238] Setting addon default-storageclass=true in "no-preload-563155"
	W0127 13:33:01.598403  425710 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:33:01.598435  425710 host.go:66] Checking if "no-preload-563155" exists ...
	I0127 13:33:01.598600  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.598645  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:01.598810  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.598857  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:01.599297  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44521
	I0127 13:33:01.599759  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.600290  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.600322  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.600924  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.601391  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.601419  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:01.617352  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I0127 13:33:01.618075  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.618636  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.618659  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.619148  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.619308  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetState
	I0127 13:33:01.619371  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0127 13:33:01.619904  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.621747  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:33:01.621920  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.621933  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.622406  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.622673  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetState
	I0127 13:33:01.623900  425710 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:33:01.624750  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:33:01.626678  425710 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:33:01.626700  425710 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:33:01.626724  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:33:01.627504  425710 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:33:01.628902  425710 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:33:01.628926  425710 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:33:01.628947  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:33:01.628903  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I0127 13:33:01.629986  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.630152  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:33:01.630410  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.630426  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.630757  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.630826  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:33:01.630847  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:33:01.631291  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:33:01.631526  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:33:01.631717  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:33:01.631904  425710 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa Username:docker}
	I0127 13:33:01.632795  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I0127 13:33:01.632996  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:33:01.633435  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.633531  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:33:01.633546  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:33:01.633577  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:33:01.633713  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:33:01.633865  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:33:01.634042  425710 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa Username:docker}
	I0127 13:33:01.634824  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.634840  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.635547  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.636185  425710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:33:01.636228  425710 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:33:01.638395  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetState
	I0127 13:33:01.640417  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:33:01.642350  425710 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:33:01.644072  425710 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:33:01.645307  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:33:01.645335  425710 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:33:01.645360  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:33:01.648500  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:33:01.648994  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:33:01.649018  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:33:01.649262  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:33:01.649515  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:33:01.649682  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:33:01.649802  425710 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa Username:docker}
	I0127 13:33:01.672914  425710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44221
	I0127 13:33:01.673543  425710 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:33:01.674270  425710 main.go:141] libmachine: Using API Version  1
	I0127 13:33:01.674302  425710 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:33:01.674667  425710 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:33:01.674885  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetState
	I0127 13:33:01.676607  425710 main.go:141] libmachine: (no-preload-563155) Calling .DriverName
	I0127 13:33:01.676851  425710 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:33:01.676870  425710 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:33:01.676897  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHHostname
	I0127 13:33:01.680385  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:33:01.680779  425710 main.go:141] libmachine: (no-preload-563155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:39:5a", ip: ""} in network mk-no-preload-563155: {Iface:virbr4 ExpiryTime:2025-01-27 14:24:46 +0000 UTC Type:0 Mac:52:54:00:85:39:5a Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:no-preload-563155 Clientid:01:52:54:00:85:39:5a}
	I0127 13:33:01.680798  425710 main.go:141] libmachine: (no-preload-563155) DBG | domain no-preload-563155 has defined IP address 192.168.72.130 and MAC address 52:54:00:85:39:5a in network mk-no-preload-563155
	I0127 13:33:01.681043  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHPort
	I0127 13:33:01.681200  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHKeyPath
	I0127 13:33:01.681348  425710 main.go:141] libmachine: (no-preload-563155) Calling .GetSSHUsername
	I0127 13:33:01.681461  425710 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/no-preload-563155/id_rsa Username:docker}
	I0127 13:33:01.851433  425710 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:33:01.894784  425710 node_ready.go:35] waiting up to 6m0s for node "no-preload-563155" to be "Ready" ...
	I0127 13:33:01.914698  425710 node_ready.go:49] node "no-preload-563155" has status "Ready":"True"
	I0127 13:33:01.914723  425710 node_ready.go:38] duration metric: took 19.904408ms for node "no-preload-563155" to be "Ready" ...
	I0127 13:33:01.914735  425710 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:33:01.923943  425710 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-lw97c" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:01.949585  425710 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:33:01.977714  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:33:01.977743  425710 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:33:01.999388  425710 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:33:01.999419  425710 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:33:02.017963  425710 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:33:02.079960  425710 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:33:02.079990  425710 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:33:02.088274  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:33:02.088302  425710 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:33:02.195308  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:33:02.195344  425710 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:33:02.254358  425710 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:33:02.254399  425710 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:33:02.290830  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:33:02.290863  425710 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:33:02.416729  425710 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:33:02.466280  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:33:02.466316  425710 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:33:02.565360  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:33:02.565403  425710 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:33:02.641586  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:33:02.641621  425710 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:33:02.688305  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:33:02.688334  425710 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:33:02.757754  425710 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:33:02.757781  425710 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:33:02.834654  425710 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:33:03.371980  425710 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.42235173s)
	I0127 13:33:03.372035  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:03.372048  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:03.372048  425710 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.354039781s)
	I0127 13:33:03.372089  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:03.372102  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:03.374615  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:03.374638  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:03.374634  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:03.374647  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:03.374656  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:03.374615  425710 main.go:141] libmachine: (no-preload-563155) DBG | Closing plugin on server side
	I0127 13:33:03.374665  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:03.374672  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:03.374657  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:03.377750  425710 main.go:141] libmachine: (no-preload-563155) DBG | Closing plugin on server side
	I0127 13:33:03.377760  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:03.377772  425710 main.go:141] libmachine: (no-preload-563155) DBG | Closing plugin on server side
	I0127 13:33:03.377774  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:03.377808  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:03.377816  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:03.397986  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:03.398014  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:03.398410  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:03.398454  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:03.980692  425710 pod_ready.go:103] pod "coredns-668d6bf9bc-lw97c" in "kube-system" namespace has status "Ready":"False"
	I0127 13:33:04.085574  425710 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.668789602s)
	I0127 13:33:04.085647  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:04.085670  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:04.085958  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:04.085976  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:04.085985  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:04.085994  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:04.086639  425710 main.go:141] libmachine: (no-preload-563155) DBG | Closing plugin on server side
	I0127 13:33:04.086646  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:04.086666  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:04.086678  425710 addons.go:479] Verifying addon metrics-server=true in "no-preload-563155"
	I0127 13:33:04.979783  425710 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.145063239s)
	I0127 13:33:04.979848  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:04.979863  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:04.980219  425710 main.go:141] libmachine: (no-preload-563155) DBG | Closing plugin on server side
	I0127 13:33:04.980265  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:04.980283  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:04.980302  425710 main.go:141] libmachine: Making call to close driver server
	I0127 13:33:04.980312  425710 main.go:141] libmachine: (no-preload-563155) Calling .Close
	I0127 13:33:04.980572  425710 main.go:141] libmachine: (no-preload-563155) DBG | Closing plugin on server side
	I0127 13:33:04.980586  425710 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:33:04.980599  425710 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:33:04.982073  425710 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-563155 addons enable metrics-server
	
	I0127 13:33:04.983504  425710 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:33:04.984884  425710 addons.go:514] duration metric: took 3.415150679s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:33:06.435505  425710 pod_ready.go:103] pod "coredns-668d6bf9bc-lw97c" in "kube-system" namespace has status "Ready":"False"
	I0127 13:33:08.536293  425710 pod_ready.go:103] pod "coredns-668d6bf9bc-lw97c" in "kube-system" namespace has status "Ready":"False"
	I0127 13:33:10.931401  425710 pod_ready.go:103] pod "coredns-668d6bf9bc-lw97c" in "kube-system" namespace has status "Ready":"False"
	I0127 13:33:11.932281  425710 pod_ready.go:93] pod "coredns-668d6bf9bc-lw97c" in "kube-system" namespace has status "Ready":"True"
	I0127 13:33:11.932305  425710 pod_ready.go:82] duration metric: took 10.008324061s for pod "coredns-668d6bf9bc-lw97c" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.932315  425710 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xlk4d" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.936886  425710 pod_ready.go:93] pod "coredns-668d6bf9bc-xlk4d" in "kube-system" namespace has status "Ready":"True"
	I0127 13:33:11.936907  425710 pod_ready.go:82] duration metric: took 4.584652ms for pod "coredns-668d6bf9bc-xlk4d" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.936917  425710 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.941715  425710 pod_ready.go:93] pod "etcd-no-preload-563155" in "kube-system" namespace has status "Ready":"True"
	I0127 13:33:11.941732  425710 pod_ready.go:82] duration metric: took 4.80675ms for pod "etcd-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.941740  425710 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.946021  425710 pod_ready.go:93] pod "kube-apiserver-no-preload-563155" in "kube-system" namespace has status "Ready":"True"
	I0127 13:33:11.946040  425710 pod_ready.go:82] duration metric: took 4.293786ms for pod "kube-apiserver-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.946052  425710 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.951235  425710 pod_ready.go:93] pod "kube-controller-manager-no-preload-563155" in "kube-system" namespace has status "Ready":"True"
	I0127 13:33:11.951253  425710 pod_ready.go:82] duration metric: took 5.19442ms for pod "kube-controller-manager-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:11.951263  425710 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pn8rl" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:12.328228  425710 pod_ready.go:93] pod "kube-proxy-pn8rl" in "kube-system" namespace has status "Ready":"True"
	I0127 13:33:12.328260  425710 pod_ready.go:82] duration metric: took 376.988511ms for pod "kube-proxy-pn8rl" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:12.328274  425710 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:12.729067  425710 pod_ready.go:93] pod "kube-scheduler-no-preload-563155" in "kube-system" namespace has status "Ready":"True"
	I0127 13:33:12.729096  425710 pod_ready.go:82] duration metric: took 400.81461ms for pod "kube-scheduler-no-preload-563155" in "kube-system" namespace to be "Ready" ...
	I0127 13:33:12.729105  425710 pod_ready.go:39] duration metric: took 10.814358458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:33:12.729130  425710 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:33:12.729192  425710 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:12.751548  425710 api_server.go:72] duration metric: took 11.181809985s to wait for apiserver process to appear ...
	I0127 13:33:12.751576  425710 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:33:12.751595  425710 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0127 13:33:12.757480  425710 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0127 13:33:12.761326  425710 api_server.go:141] control plane version: v1.32.1
	I0127 13:33:12.761352  425710 api_server.go:131] duration metric: took 9.768205ms to wait for apiserver health ...
	I0127 13:33:12.761363  425710 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:33:12.932818  425710 system_pods.go:59] 9 kube-system pods found
	I0127 13:33:12.932849  425710 system_pods.go:61] "coredns-668d6bf9bc-lw97c" [23d851ec-de2d-4c28-8fe6-a20675a26a07] Running
	I0127 13:33:12.932854  425710 system_pods.go:61] "coredns-668d6bf9bc-xlk4d" [10841d1b-6c81-478f-81c9-e75635698866] Running
	I0127 13:33:12.932859  425710 system_pods.go:61] "etcd-no-preload-563155" [2af85f26-779b-44ad-b28b-e6100ff7339b] Running
	I0127 13:33:12.932862  425710 system_pods.go:61] "kube-apiserver-no-preload-563155" [18bcdd66-3243-49b2-bba4-d1d91f1343dc] Running
	I0127 13:33:12.932866  425710 system_pods.go:61] "kube-controller-manager-no-preload-563155" [2bbe179e-8cab-4b1c-a540-59c93d5ee6a8] Running
	I0127 13:33:12.932869  425710 system_pods.go:61] "kube-proxy-pn8rl" [56c43bf4-757d-44d6-9aa2-387a4ee10693] Running
	I0127 13:33:12.932871  425710 system_pods.go:61] "kube-scheduler-no-preload-563155" [067c4288-326e-4553-8378-ba835e8dea4a] Running
	I0127 13:33:12.932877  425710 system_pods.go:61] "metrics-server-f79f97bbb-bdcxg" [c3d5e10f-b2dc-4890-96dc-c53d634f6823] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:33:12.932881  425710 system_pods.go:61] "storage-provisioner" [4e8a0a0c-0e17-4d75-92db-82da926e7c42] Running
	I0127 13:33:12.932889  425710 system_pods.go:74] duration metric: took 171.519231ms to wait for pod list to return data ...
	I0127 13:33:12.932896  425710 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:33:13.128468  425710 default_sa.go:45] found service account: "default"
	I0127 13:33:13.128500  425710 default_sa.go:55] duration metric: took 195.59575ms for default service account to be created ...
	I0127 13:33:13.128513  425710 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:33:13.331933  425710 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-563155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-563155 -n no-preload-563155
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-563155 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-563155 logs -n 25: (1.504005684s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-563155                                   | no-preload-563155            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-441438       | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-174381                 | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-174381                                  | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-838260        | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-838260             | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-441438                           | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-639843             | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-639843                  | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-639843 image list                           | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	| delete  | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:54 UTC | 27 Jan 25 13:54 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:34:50
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:34:50.343590  429070 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:34:50.343706  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343717  429070 out.go:358] Setting ErrFile to fd 2...
	I0127 13:34:50.343725  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343905  429070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:34:50.344540  429070 out.go:352] Setting JSON to false
	I0127 13:34:50.345553  429070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":22630,"bootTime":1737962260,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:34:50.345705  429070 start.go:139] virtualization: kvm guest
	I0127 13:34:50.348432  429070 out.go:177] * [newest-cni-639843] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:34:50.349607  429070 notify.go:220] Checking for updates...
	I0127 13:34:50.349639  429070 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:34:50.350877  429070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:34:50.352137  429070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:50.353523  429070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:34:50.354936  429070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:34:50.356253  429070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:34:50.358120  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:50.358577  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.358648  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.375344  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 13:34:50.375770  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.376385  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.376429  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.376809  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.377061  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.377398  429070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:34:50.377833  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.377889  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.393490  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0127 13:34:50.393954  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.394574  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.394602  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.394931  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.395175  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.432045  429070 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:34:50.433260  429070 start.go:297] selected driver: kvm2
	I0127 13:34:50.433295  429070 start.go:901] validating driver "kvm2" against &{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.433450  429070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:34:50.434521  429070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.434662  429070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:34:50.455080  429070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:34:50.455695  429070 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:34:50.455755  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:34:50.455835  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:50.455908  429070 start.go:340] cluster config:
	{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.456092  429070 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.457706  429070 out.go:177] * Starting "newest-cni-639843" primary control-plane node in "newest-cni-639843" cluster
	I0127 13:34:50.458857  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:34:50.458907  429070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:34:50.458924  429070 cache.go:56] Caching tarball of preloaded images
	I0127 13:34:50.459033  429070 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:34:50.459049  429070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:34:50.459193  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:34:50.459403  429070 start.go:360] acquireMachinesLock for newest-cni-639843: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:34:50.459457  429070 start.go:364] duration metric: took 33.893µs to acquireMachinesLock for "newest-cni-639843"
	I0127 13:34:50.459478  429070 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:34:50.459488  429070 fix.go:54] fixHost starting: 
	I0127 13:34:50.459761  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.459807  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.475245  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0127 13:34:50.475743  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.476455  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.476504  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.476932  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.477227  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.477420  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:34:50.479725  429070 fix.go:112] recreateIfNeeded on newest-cni-639843: state=Stopped err=<nil>
	I0127 13:34:50.479768  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	W0127 13:34:50.479933  429070 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:34:50.481457  429070 out.go:177] * Restarting existing kvm2 VM for "newest-cni-639843" ...
	I0127 13:34:48.302747  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:48.321834  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:48.321899  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:48.370678  427154 cri.go:89] found id: ""
	I0127 13:34:48.370716  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.370732  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:48.370741  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:48.370813  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:48.430514  427154 cri.go:89] found id: ""
	I0127 13:34:48.430655  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.430683  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:48.430702  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:48.430826  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:48.477908  427154 cri.go:89] found id: ""
	I0127 13:34:48.477941  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.477954  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:48.477962  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:48.478036  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:48.532193  427154 cri.go:89] found id: ""
	I0127 13:34:48.532230  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.532242  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:48.532250  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:48.532316  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:48.580627  427154 cri.go:89] found id: ""
	I0127 13:34:48.580658  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.580667  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:48.580673  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:48.580744  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:48.620393  427154 cri.go:89] found id: ""
	I0127 13:34:48.620428  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.620441  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:48.620449  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:48.620518  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:48.662032  427154 cri.go:89] found id: ""
	I0127 13:34:48.662071  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.662079  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:48.662097  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:48.662164  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:48.699662  427154 cri.go:89] found id: ""
	I0127 13:34:48.699697  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.699709  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:48.699723  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:48.699745  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:48.752100  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:48.752134  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:48.768121  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:48.768167  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:48.838690  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:48.838718  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:48.838734  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:48.928433  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:48.928471  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:52.576263  426243 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:34:52.576356  426243 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:52.576423  426243 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:52.576582  426243 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:52.576704  426243 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:34:52.576783  426243 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:52.578299  426243 out.go:235]   - Generating certificates and keys ...
	I0127 13:34:52.578380  426243 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:34:52.578439  426243 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:34:52.578509  426243 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:34:52.578594  426243 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:34:52.578701  426243 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:34:52.578757  426243 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:34:52.578818  426243 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:34:52.578870  426243 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:34:52.578962  426243 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:34:52.579063  426243 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:34:52.579111  426243 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:34:52.579164  426243 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:34:52.579227  426243 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:34:52.579282  426243 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:34:52.579333  426243 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:34:52.579387  426243 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:34:52.579449  426243 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:34:52.579519  426243 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:34:52.579604  426243 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:34:52.581730  426243 out.go:235]   - Booting up control plane ...
	I0127 13:34:52.581854  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:34:52.581961  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:34:52.582058  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:34:52.582184  426243 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:34:52.582253  426243 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:34:52.582290  426243 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:34:52.582417  426243 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:34:52.582554  426243 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:34:52.582651  426243 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002999225s
	I0127 13:34:52.582795  426243 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:34:52.582903  426243 kubeadm.go:310] [api-check] The API server is healthy after 5.501149453s
	I0127 13:34:52.583076  426243 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:34:52.583258  426243 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:34:52.583323  426243 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:34:52.583591  426243 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-174381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:34:52.583679  426243 kubeadm.go:310] [bootstrap-token] Using token: 5hn0ox.etnk5twofkqgha4f
	I0127 13:34:52.584876  426243 out.go:235]   - Configuring RBAC rules ...
	I0127 13:34:52.585016  426243 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:34:52.585138  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:34:52.585329  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:34:52.585515  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:34:52.585645  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:34:52.585730  426243 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:34:52.585829  426243 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:34:52.585867  426243 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:34:52.585911  426243 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:34:52.585917  426243 kubeadm.go:310] 
	I0127 13:34:52.585967  426243 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:34:52.585973  426243 kubeadm.go:310] 
	I0127 13:34:52.586066  426243 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:34:52.586082  426243 kubeadm.go:310] 
	I0127 13:34:52.586138  426243 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:34:52.586214  426243 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:34:52.586295  426243 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:34:52.586319  426243 kubeadm.go:310] 
	I0127 13:34:52.586416  426243 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:34:52.586463  426243 kubeadm.go:310] 
	I0127 13:34:52.586522  426243 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:34:52.586532  426243 kubeadm.go:310] 
	I0127 13:34:52.586628  426243 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:34:52.586712  426243 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:34:52.586770  426243 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:34:52.586777  426243 kubeadm.go:310] 
	I0127 13:34:52.586857  426243 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:34:52.586926  426243 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:34:52.586932  426243 kubeadm.go:310] 
	I0127 13:34:52.587010  426243 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587095  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 \
	I0127 13:34:52.587119  426243 kubeadm.go:310] 	--control-plane 
	I0127 13:34:52.587125  426243 kubeadm.go:310] 
	I0127 13:34:52.587196  426243 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:34:52.587204  426243 kubeadm.go:310] 
	I0127 13:34:52.587272  426243 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587400  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 
	I0127 13:34:52.587418  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:34:52.587432  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:52.588976  426243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:34:50.482735  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Start
	I0127 13:34:50.482923  429070 main.go:141] libmachine: (newest-cni-639843) starting domain...
	I0127 13:34:50.482942  429070 main.go:141] libmachine: (newest-cni-639843) ensuring networks are active...
	I0127 13:34:50.483967  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network default is active
	I0127 13:34:50.484412  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network mk-newest-cni-639843 is active
	I0127 13:34:50.484881  429070 main.go:141] libmachine: (newest-cni-639843) getting domain XML...
	I0127 13:34:50.485667  429070 main.go:141] libmachine: (newest-cni-639843) creating domain...
	I0127 13:34:51.790885  429070 main.go:141] libmachine: (newest-cni-639843) waiting for IP...
	I0127 13:34:51.792240  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:51.793056  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:51.793082  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:51.792897  429104 retry.go:31] will retry after 310.654811ms: waiting for domain to come up
	I0127 13:34:52.105667  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.106457  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.106639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.106581  429104 retry.go:31] will retry after 280.140783ms: waiting for domain to come up
	I0127 13:34:52.388057  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.388616  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.388639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.388575  429104 retry.go:31] will retry after 317.414736ms: waiting for domain to come up
	I0127 13:34:52.708208  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.708845  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.708880  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.708795  429104 retry.go:31] will retry after 475.980482ms: waiting for domain to come up
	I0127 13:34:53.186613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.187252  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.187320  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.187240  429104 retry.go:31] will retry after 619.306112ms: waiting for domain to come up
	I0127 13:34:53.807794  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.808436  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.808485  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.808365  429104 retry.go:31] will retry after 838.158661ms: waiting for domain to come up
	I0127 13:34:54.647849  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:54.648442  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:54.648465  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:54.648411  429104 retry.go:31] will retry after 739.028542ms: waiting for domain to come up
	I0127 13:34:51.475609  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:51.489500  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:51.489579  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:51.536219  427154 cri.go:89] found id: ""
	I0127 13:34:51.536250  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.536262  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:51.536270  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:51.536334  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:51.577494  427154 cri.go:89] found id: ""
	I0127 13:34:51.577522  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.577536  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:51.577543  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:51.577606  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:51.614430  427154 cri.go:89] found id: ""
	I0127 13:34:51.614463  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.614476  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:51.614484  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:51.614602  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:51.666530  427154 cri.go:89] found id: ""
	I0127 13:34:51.666582  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.666591  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:51.666597  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:51.666653  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:51.705538  427154 cri.go:89] found id: ""
	I0127 13:34:51.705567  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.705579  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:51.705587  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:51.705645  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:51.743604  427154 cri.go:89] found id: ""
	I0127 13:34:51.743638  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.743650  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:51.743658  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:51.743721  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:51.778029  427154 cri.go:89] found id: ""
	I0127 13:34:51.778058  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.778070  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:51.778078  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:51.778148  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:51.819260  427154 cri.go:89] found id: ""
	I0127 13:34:51.819294  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.819307  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:51.819321  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:51.819338  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:51.887511  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:51.887552  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:51.904227  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:51.904261  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:51.980655  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:51.980684  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:51.980699  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:52.085922  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:52.085973  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:54.642029  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:54.655922  427154 kubeadm.go:597] duration metric: took 4m4.240008337s to restartPrimaryControlPlane
	W0127 13:34:54.656192  427154 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:34:54.656244  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:34:52.590276  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:34:52.604204  426243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:34:52.631515  426243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:34:52.631609  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:52.631702  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-174381 minikube.k8s.io/updated_at=2025_01_27T13_34_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-174381 minikube.k8s.io/primary=true
	I0127 13:34:52.663541  426243 ops.go:34] apiserver oom_adj: -16
	I0127 13:34:52.870691  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.371756  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.871386  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.371644  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.871179  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.370747  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.871458  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.371676  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.870824  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.982232  426243 kubeadm.go:1113] duration metric: took 4.350694221s to wait for elevateKubeSystemPrivileges
	I0127 13:34:56.982281  426243 kubeadm.go:394] duration metric: took 6m1.699030467s to StartCluster
	I0127 13:34:56.982314  426243 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.982426  426243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:56.983746  426243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.984032  426243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:34:56.984111  426243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:34:56.984230  426243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-174381"
	I0127 13:34:56.984249  426243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-174381"
	W0127 13:34:56.984258  426243 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:34:56.984273  426243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-174381"
	I0127 13:34:56.984292  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984300  426243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-174381"
	I0127 13:34:56.984303  426243 config.go:182] Loaded profile config "embed-certs-174381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:56.984359  426243 addons.go:69] Setting dashboard=true in profile "embed-certs-174381"
	I0127 13:34:56.984372  426243 addons.go:238] Setting addon dashboard=true in "embed-certs-174381"
	W0127 13:34:56.984381  426243 addons.go:247] addon dashboard should already be in state true
	I0127 13:34:56.984405  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984450  426243 addons.go:69] Setting metrics-server=true in profile "embed-certs-174381"
	I0127 13:34:56.984484  426243 addons.go:238] Setting addon metrics-server=true in "embed-certs-174381"
	W0127 13:34:56.984494  426243 addons.go:247] addon metrics-server should already be in state true
	I0127 13:34:56.984524  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984760  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984778  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984799  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984801  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984812  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984826  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984943  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984977  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.986354  426243 out.go:177] * Verifying Kubernetes components...
	I0127 13:34:56.988314  426243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:34:57.003008  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I0127 13:34:57.003716  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.003737  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0127 13:34:57.004011  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0127 13:34:57.004163  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004169  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0127 13:34:57.004457  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004482  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.004559  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004638  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004651  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004670  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005012  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005085  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005111  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005198  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.005324  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005340  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005955  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005969  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005970  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.006577  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006617  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.006912  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006964  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.007601  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.007633  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.009217  426243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-174381"
	W0127 13:34:57.009239  426243 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:34:57.009268  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:57.009605  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.009648  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.027242  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0127 13:34:57.027495  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0127 13:34:57.027644  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.027844  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.028181  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028198  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028301  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028318  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028539  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028633  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.028808  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.029068  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0127 13:34:57.029543  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.030162  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.030190  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.030581  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.030601  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031166  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.031207  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.031430  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031637  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I0127 13:34:57.031993  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.032625  426243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:34:57.032750  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.032765  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.033302  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.033477  426243 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:34:57.033498  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.033587  426243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.033607  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:34:57.033627  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.035541  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.035761  426243 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036794  426243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036804  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:34:57.036814  426243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:34:57.036833  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.037349  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.037808  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:34:57.037827  426243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:34:57.037856  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.038015  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.038042  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.038208  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.038375  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.038561  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.038701  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.041035  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041500  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.041519  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041915  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042008  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042189  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042254  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.042272  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042592  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042583  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.042727  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042852  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.055810  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0127 13:34:57.056237  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.056772  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.056801  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.057165  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.057501  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.059165  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.059398  426243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.059418  426243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:34:57.059437  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.062703  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063236  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.063266  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063369  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.063544  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.063694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.063831  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.242347  426243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:34:57.326178  426243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352801  426243 node_ready.go:49] node "embed-certs-174381" has status "Ready":"True"
	I0127 13:34:57.352828  426243 node_ready.go:38] duration metric: took 26.613856ms for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352841  426243 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:34:57.368293  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:57.372941  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:34:57.372962  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:34:57.391676  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.418587  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:34:57.418616  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:34:57.446588  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.460844  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:34:57.460869  426243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:34:57.507947  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:34:57.507976  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:34:57.542669  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.542701  426243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:34:57.630641  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:34:57.630672  426243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:34:57.639506  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.693463  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:34:57.693498  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:34:57.806045  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:34:57.806082  426243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:34:57.930058  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:34:57.930101  426243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:34:58.055263  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:34:58.055295  426243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:34:58.110576  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:34:58.110609  426243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:34:58.202270  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.202305  426243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:34:58.293311  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.514356  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067720868s)
	I0127 13:34:58.514435  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514450  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.514846  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.514876  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.514894  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.514909  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514920  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.515161  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.515197  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.515860  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516243  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124532885s)
	I0127 13:34:58.516270  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516281  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.516739  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.516757  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.516768  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516776  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516787  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.517207  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.517230  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549206  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.549228  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.549614  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.549638  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549648  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260116  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.620545789s)
	I0127 13:34:59.260244  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260271  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.260620  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260713  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.260730  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.260746  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260761  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.261011  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.261041  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.261061  426243 addons.go:479] Verifying addon metrics-server=true in "embed-certs-174381"
	I0127 13:34:59.395546  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.395572  426243 pod_ready.go:82] duration metric: took 2.027244475s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.395586  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407673  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.407695  426243 pod_ready.go:82] duration metric: took 12.102291ms for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407705  426243 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417168  426243 pod_ready.go:93] pod "etcd-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.417190  426243 pod_ready.go:82] duration metric: took 9.47928ms for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417199  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:00.168433  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.875044372s)
	I0127 13:35:00.168496  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.168520  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.168866  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.170590  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.170645  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.170666  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.170673  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.171042  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.171132  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.171105  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.172686  426243 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-174381 addons enable metrics-server
	
	I0127 13:35:00.174376  426243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:34:59.517968  427154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.861694115s)
	I0127 13:34:59.518062  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:34:59.536180  427154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:34:59.547986  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:34:59.561566  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:34:59.561591  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:34:59.561645  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:34:59.574802  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:34:59.574872  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:34:59.588185  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:34:59.598292  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:34:59.598356  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:34:59.608921  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.621764  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:34:59.621825  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.635526  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:34:59.646582  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:34:59.646644  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:34:59.657975  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:34:59.745239  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:34:59.745337  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:59.946676  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:59.946890  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:59.947050  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:35:00.183580  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:55.388471  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:55.388933  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:55.388964  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:55.388914  429104 retry.go:31] will retry after 1.346738272s: waiting for domain to come up
	I0127 13:34:56.737433  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:56.738024  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:56.738081  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:56.738007  429104 retry.go:31] will retry after 1.120347472s: waiting for domain to come up
	I0127 13:34:57.860265  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:57.860912  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:57.860943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:57.860882  429104 retry.go:31] will retry after 2.152534572s: waiting for domain to come up
	I0127 13:35:00.015953  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:00.016579  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:00.016613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:00.016544  429104 retry.go:31] will retry after 2.588698804s: waiting for domain to come up
	I0127 13:35:00.184950  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:35:00.185049  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:35:00.185140  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:35:00.185334  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:35:00.185435  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:35:00.186094  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:35:00.186301  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:35:00.187022  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:35:00.187455  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:35:00.187928  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:35:00.188334  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:35:00.188531  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:35:00.188608  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:35:00.344156  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:35:00.836083  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:35:00.964664  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:35:01.072929  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:35:01.092946  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:35:01.097538  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:35:01.097961  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:35:01.292953  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:35:00.175566  426243 addons.go:514] duration metric: took 3.191465201s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:35:01.424773  426243 pod_ready.go:103] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"False"
	I0127 13:35:01.924012  426243 pod_ready.go:93] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:01.924044  426243 pod_ready.go:82] duration metric: took 2.506836977s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:01.924057  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.607848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:02.608639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:02.608669  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:02.608620  429104 retry.go:31] will retry after 2.763044938s: waiting for domain to come up
	I0127 13:35:01.294375  427154 out.go:235]   - Booting up control plane ...
	I0127 13:35:01.294569  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:35:01.306014  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:35:01.309847  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:35:01.310062  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:35:01.312436  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:35:02.931062  426243 pod_ready.go:93] pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.931095  426243 pod_ready.go:82] duration metric: took 1.007026875s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.931108  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936917  426243 pod_ready.go:93] pod "kube-proxy-cjsf9" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.936945  426243 pod_ready.go:82] duration metric: took 5.828276ms for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936957  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444155  426243 pod_ready.go:93] pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:03.444192  426243 pod_ready.go:82] duration metric: took 507.225554ms for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444203  426243 pod_ready.go:39] duration metric: took 6.091349359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:35:03.444226  426243 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:03.444294  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:03.488162  426243 api_server.go:72] duration metric: took 6.504085901s to wait for apiserver process to appear ...
	I0127 13:35:03.488197  426243 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:03.488224  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:35:03.493586  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0127 13:35:03.494867  426243 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:03.494894  426243 api_server.go:131] duration metric: took 6.689991ms to wait for apiserver health ...
	I0127 13:35:03.494903  426243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:03.575835  426243 system_pods.go:59] 9 kube-system pods found
	I0127 13:35:03.575871  426243 system_pods.go:61] "coredns-668d6bf9bc-9ncnm" [8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7] Running
	I0127 13:35:03.575877  426243 system_pods.go:61] "coredns-668d6bf9bc-hjncm" [68641e50-9f99-4811-9752-c7dc0db47502] Running
	I0127 13:35:03.575881  426243 system_pods.go:61] "etcd-embed-certs-174381" [fc5cb0ba-724d-4b3d-a6d0-65644ed57d99] Running
	I0127 13:35:03.575886  426243 system_pods.go:61] "kube-apiserver-embed-certs-174381" [7afdc2d3-86bd-480d-a081-e1475ff21346] Running
	I0127 13:35:03.575890  426243 system_pods.go:61] "kube-controller-manager-embed-certs-174381" [fa410171-2b30-4c79-97d4-87c1549fd75c] Running
	I0127 13:35:03.575894  426243 system_pods.go:61] "kube-proxy-cjsf9" [c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6] Running
	I0127 13:35:03.575901  426243 system_pods.go:61] "kube-scheduler-embed-certs-174381" [ab92b381-fb78-4aa1-bc55-4e47a58f2c32] Running
	I0127 13:35:03.575908  426243 system_pods.go:61] "metrics-server-f79f97bbb-hxlwf" [cb779c78-85f9-48e7-88c3-f087f57547e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:03.575913  426243 system_pods.go:61] "storage-provisioner" [3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1] Running
	I0127 13:35:03.575922  426243 system_pods.go:74] duration metric: took 81.012821ms to wait for pod list to return data ...
	I0127 13:35:03.575931  426243 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:03.772597  426243 default_sa.go:45] found service account: "default"
	I0127 13:35:03.772641  426243 default_sa.go:55] duration metric: took 196.700969ms for default service account to be created ...
	I0127 13:35:03.772655  426243 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:35:03.976966  426243 system_pods.go:87] 9 kube-system pods found
	I0127 13:35:05.375624  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:05.376167  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:05.376199  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:05.376124  429104 retry.go:31] will retry after 2.824398155s: waiting for domain to come up
	I0127 13:35:08.203385  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:08.203848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:08.203881  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:08.203823  429104 retry.go:31] will retry after 4.529537578s: waiting for domain to come up
	I0127 13:35:12.735786  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736343  429070 main.go:141] libmachine: (newest-cni-639843) found domain IP: 192.168.50.22
	I0127 13:35:12.736364  429070 main.go:141] libmachine: (newest-cni-639843) reserving static IP address...
	I0127 13:35:12.736378  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has current primary IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736707  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.736748  429070 main.go:141] libmachine: (newest-cni-639843) reserved static IP address 192.168.50.22 for domain newest-cni-639843
	I0127 13:35:12.736770  429070 main.go:141] libmachine: (newest-cni-639843) DBG | skip adding static IP to network mk-newest-cni-639843 - found existing host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"}
	I0127 13:35:12.736785  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Getting to WaitForSSH function...
	I0127 13:35:12.736810  429070 main.go:141] libmachine: (newest-cni-639843) waiting for SSH...
	I0127 13:35:12.739230  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739563  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.739592  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739721  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH client type: external
	I0127 13:35:12.739746  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa (-rw-------)
	I0127 13:35:12.739781  429070 main.go:141] libmachine: (newest-cni-639843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:35:12.739791  429070 main.go:141] libmachine: (newest-cni-639843) DBG | About to run SSH command:
	I0127 13:35:12.739800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | exit 0
	I0127 13:35:12.866664  429070 main.go:141] libmachine: (newest-cni-639843) DBG | SSH cmd err, output: <nil>: 
	I0127 13:35:12.867059  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetConfigRaw
	I0127 13:35:12.867776  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:12.870461  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.870943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.870979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.871221  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:35:12.871401  429070 machine.go:93] provisionDockerMachine start ...
	I0127 13:35:12.871421  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:12.871618  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.873979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874373  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.874411  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874581  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.874746  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.874903  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.875063  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.875221  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.875426  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.875440  429070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:35:12.979102  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:35:12.979140  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979406  429070 buildroot.go:166] provisioning hostname "newest-cni-639843"
	I0127 13:35:12.979435  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.982631  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983000  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.983025  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983170  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.983324  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983447  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983605  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.983809  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.984033  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.984051  429070 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-639843 && echo "newest-cni-639843" | sudo tee /etc/hostname
	I0127 13:35:13.107964  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-639843
	
	I0127 13:35:13.108004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.111168  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111589  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.111617  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111790  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.111995  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112158  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112289  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.112481  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.112709  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.112733  429070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-639843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-639843/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-639843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:35:13.226643  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:35:13.226683  429070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:35:13.226734  429070 buildroot.go:174] setting up certificates
	I0127 13:35:13.226749  429070 provision.go:84] configureAuth start
	I0127 13:35:13.226767  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:13.227060  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:13.230284  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230719  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.230752  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.233444  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233798  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.233832  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233972  429070 provision.go:143] copyHostCerts
	I0127 13:35:13.234039  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:35:13.234053  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:35:13.234146  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:35:13.234301  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:35:13.234313  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:35:13.234354  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:35:13.234450  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:35:13.234462  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:35:13.234497  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:35:13.234598  429070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.newest-cni-639843 san=[127.0.0.1 192.168.50.22 localhost minikube newest-cni-639843]
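
The server certificate above is generated with a SAN list covering the loopback address, the VM IP, and the machine names. As a rough illustration of how a SAN-bearing server certificate can be produced with Go's standard crypto/x509 package (self-signed here for brevity, whereas the log shows it being signed by the minikube CA):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Illustration only: minikube signs with its CA key instead of self-signing.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-639843"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the log line above.
			DNSNames:    []string{"localhost", "minikube", "newest-cni-639843"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.22")},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
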
	I0127 13:35:13.505038  429070 provision.go:177] copyRemoteCerts
	I0127 13:35:13.505119  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:35:13.505154  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.508162  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508530  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.508555  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508759  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.508944  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.509117  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.509267  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:13.595888  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:35:13.621151  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:35:13.647473  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:35:13.673605  429070 provision.go:87] duration metric: took 446.83901ms to configureAuth
	I0127 13:35:13.673655  429070 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:35:13.673889  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:13.674004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.676982  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677392  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.677421  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677573  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.677762  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.677972  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.678123  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.678273  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.678496  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.678527  429070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:35:13.921465  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:35:13.921494  429070 machine.go:96] duration metric: took 1.050079095s to provisionDockerMachine
	I0127 13:35:13.921510  429070 start.go:293] postStartSetup for "newest-cni-639843" (driver="kvm2")
	I0127 13:35:13.921522  429070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:35:13.921543  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:13.921954  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:35:13.922025  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.925574  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.925941  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.926012  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.926266  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.926493  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.926675  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.926888  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.014753  429070 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:35:14.019344  429070 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:35:14.019374  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:35:14.019439  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:35:14.019540  429070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:35:14.019659  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:35:14.031277  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:14.060121  429070 start.go:296] duration metric: took 138.59357ms for postStartSetup
	I0127 13:35:14.060165  429070 fix.go:56] duration metric: took 23.600678344s for fixHost
	I0127 13:35:14.060188  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.063145  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063514  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.063542  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063761  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.063980  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064176  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064340  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.064541  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:14.064724  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:14.064738  429070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:35:14.172785  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984914.150810987
	
	I0127 13:35:14.172823  429070 fix.go:216] guest clock: 1737984914.150810987
	I0127 13:35:14.172832  429070 fix.go:229] Guest: 2025-01-27 13:35:14.150810987 +0000 UTC Remote: 2025-01-27 13:35:14.060169498 +0000 UTC m=+23.763612053 (delta=90.641489ms)
	I0127 13:35:14.172889  429070 fix.go:200] guest clock delta is within tolerance: 90.641489ms
	I0127 13:35:14.172905  429070 start.go:83] releasing machines lock for "newest-cni-639843", held for 23.713435883s
	I0127 13:35:14.172938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.173202  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:14.176163  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176559  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.176600  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177182  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177351  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177450  429070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:35:14.177498  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.177596  429070 ssh_runner.go:195] Run: cat /version.json
	I0127 13:35:14.177625  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.180456  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180561  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180838  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180910  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.180914  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180944  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.181150  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.181189  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181344  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181357  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181546  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181536  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.181739  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.283980  429070 ssh_runner.go:195] Run: systemctl --version
	I0127 13:35:14.290329  429070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:35:14.450608  429070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:35:14.461512  429070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:35:14.461597  429070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:35:14.482924  429070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:35:14.482951  429070 start.go:495] detecting cgroup driver to use...
	I0127 13:35:14.483022  429070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:35:14.503452  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:35:14.517592  429070 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:35:14.517659  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:35:14.532792  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:35:14.547306  429070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:35:14.671116  429070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:35:14.818034  429070 docker.go:233] disabling docker service ...
	I0127 13:35:14.818133  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:35:14.832550  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:35:14.845137  429070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:35:14.986833  429070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:35:15.122943  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:35:15.137706  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:35:15.157591  429070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:35:15.157669  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.168185  429070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:35:15.168268  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.178876  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.188792  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.198951  429070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:35:15.209169  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.219549  429070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.238633  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
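
The sequence of `sed` invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. A standalone Go sketch of the same kind of line-oriented config rewriting (operating on an in-memory string with made-up contents, not the real file):

	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	
		fmt.Print(conf)
	}
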
	I0127 13:35:15.249729  429070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:35:15.259178  429070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:35:15.259244  429070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:35:15.272097  429070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
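
The three commands above form a fallback chain: the bridge-netfilter sysctl cannot be read because its /proc entry does not exist until the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding is then enabled by writing to /proc. A minimal Go sketch of that check-then-load pattern, run as root (a hypothetical standalone program, not minikube's code):

	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	func main() {
		// If the sysctl's /proc entry is missing, br_netfilter is not loaded yet.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			log.Fatalf("enable ip_forward: %v", err)
		}
	}
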
	I0127 13:35:15.281611  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:15.403472  429070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:35:15.498842  429070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:35:15.498928  429070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:35:15.505405  429070 start.go:563] Will wait 60s for crictl version
	I0127 13:35:15.505478  429070 ssh_runner.go:195] Run: which crictl
	I0127 13:35:15.509869  429070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:35:15.580026  429070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:35:15.580122  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.609376  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.643173  429070 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:35:15.644483  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:15.647483  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.647905  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:15.647930  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.648148  429070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 13:35:15.652911  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:15.668696  429070 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 13:35:15.670127  429070 kubeadm.go:883] updating cluster {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:35:15.670264  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:35:15.670328  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:15.716362  429070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 13:35:15.716455  429070 ssh_runner.go:195] Run: which lz4
	I0127 13:35:15.721254  429070 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:35:15.727443  429070 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:35:15.727478  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 13:35:17.208454  429070 crio.go:462] duration metric: took 1.487249966s to copy over tarball
	I0127 13:35:17.208542  429070 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:35:19.421239  429070 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.212662568s)
	I0127 13:35:19.421271  429070 crio.go:469] duration metric: took 2.21278342s to extract the tarball
	I0127 13:35:19.421281  429070 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 13:35:19.461756  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:19.504974  429070 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:35:19.505005  429070 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:35:19.505015  429070 kubeadm.go:934] updating node { 192.168.50.22 8443 v1.32.1 crio true true} ...
	I0127 13:35:19.505173  429070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-639843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:35:19.505269  429070 ssh_runner.go:195] Run: crio config
	I0127 13:35:19.556732  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:19.556754  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:19.556766  429070 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 13:35:19.556791  429070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.22 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-639843 NodeName:newest-cni-639843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:35:19.556951  429070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-639843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:35:19.557032  429070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:35:19.567405  429070 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:35:19.567483  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:35:19.577572  429070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 13:35:19.595555  429070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:35:19.612336  429070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0127 13:35:19.630199  429070 ssh_runner.go:195] Run: grep 192.168.50.22	control-plane.minikube.internal$ /etc/hosts
	I0127 13:35:19.634268  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:19.646912  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:19.764087  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:19.783083  429070 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843 for IP: 192.168.50.22
	I0127 13:35:19.783115  429070 certs.go:194] generating shared ca certs ...
	I0127 13:35:19.783139  429070 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:19.783330  429070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:35:19.783386  429070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:35:19.783400  429070 certs.go:256] generating profile certs ...
	I0127 13:35:19.783534  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/client.key
	I0127 13:35:19.783619  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key.505bfb94
	I0127 13:35:19.783671  429070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key
	I0127 13:35:19.783826  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:35:19.783866  429070 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:35:19.783880  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:35:19.783913  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:35:19.783939  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:35:19.783961  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:35:19.784010  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:19.784667  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:35:19.821550  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:35:19.860184  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:35:19.893311  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:35:19.926181  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:35:19.954565  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:35:19.997938  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:35:20.022058  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:35:20.045748  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:35:20.069279  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:35:20.092959  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:35:20.117180  429070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:35:20.135202  429070 ssh_runner.go:195] Run: openssl version
	I0127 13:35:20.141197  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:35:20.152160  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156810  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156871  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.162645  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:35:20.174920  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:35:20.187426  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192129  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192174  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.198019  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:35:20.210195  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:35:20.220934  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225588  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225622  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.231516  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:35:20.243779  429070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:35:20.248511  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:35:20.254523  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:35:20.260441  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:35:20.266429  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:35:20.272290  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:35:20.278051  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
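
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate remains valid for at least the next 24 hours. The same check in Go, using only the standard library (a sketch over one of the paths from the log, not minikube's actual helper):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
		deadline := time.Now().Add(86400 * time.Second)
		fmt.Println("valid past deadline:", cert.NotAfter.After(deadline))
	}
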
	I0127 13:35:20.284024  429070 kubeadm.go:392] StartCluster: {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:35:20.284105  429070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:35:20.284164  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.332523  429070 cri.go:89] found id: ""
	I0127 13:35:20.332587  429070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:35:20.344932  429070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:35:20.344959  429070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:35:20.345011  429070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:35:20.355729  429070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:35:20.356795  429070 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-639843" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:20.357505  429070 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-639843" cluster setting kubeconfig missing "newest-cni-639843" context setting]
	I0127 13:35:20.358374  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:20.360037  429070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:35:20.371572  429070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.22
	I0127 13:35:20.371606  429070 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:35:20.371622  429070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:35:20.371679  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.418797  429070 cri.go:89] found id: ""
	I0127 13:35:20.418873  429070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:35:20.437304  429070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:35:20.447636  429070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:35:20.447660  429070 kubeadm.go:157] found existing configuration files:
	
	I0127 13:35:20.447704  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:35:20.458280  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:35:20.458335  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:35:20.469304  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:35:20.478639  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:35:20.478689  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:35:20.488624  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.497867  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:35:20.497908  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.507379  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:35:20.516362  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:35:20.516416  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:35:20.525787  429070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:35:20.542646  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:20.671597  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.498726  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.899789  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.965210  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:22.062165  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:22.062252  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:22.563318  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.063066  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.082649  429070 api_server.go:72] duration metric: took 1.020482627s to wait for apiserver process to appear ...
	I0127 13:35:23.082686  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:23.082711  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:23.083244  429070 api_server.go:269] stopped: https://192.168.50.22:8443/healthz: Get "https://192.168.50.22:8443/healthz": dial tcp 192.168.50.22:8443: connect: connection refused
	I0127 13:35:23.583699  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.503776  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:35:25.503807  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:35:25.503825  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.547403  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.547434  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
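
The log above shows the apiserver healthz probe being retried: first a connection refused, then a 403 for the anonymous user, then 500s while post-start hooks settle, until the endpoint reports healthy or the wait times out. A simplified Go polling loop in the same spirit (TLS verification is skipped here purely for brevity; the real probe trusts the cluster CA and authenticates with client certificates):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: skip verification instead of configuring the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.22:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}
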
	I0127 13:35:25.583659  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.589328  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.589357  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.082833  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.087881  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.087908  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.583159  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.592115  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.592148  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:27.083703  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.090407  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.098905  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.098928  429070 api_server.go:131] duration metric: took 4.01623437s to wait for apiserver health ...
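	The block above is minikube polling the apiserver's /healthz endpoint roughly every 500ms until the post-start hooks finish and the endpoint flips from 500 (whose body carries the per-check [+]/[-] listing) to 200. A minimal Go sketch of that retry loop, not minikube's actual api_server.go, assuming it is acceptable to skip TLS verification rather than load the cluster CA the way minikube does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// assumption for this sketch: skip certificate verification
			// instead of trusting the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "healthz returned 200: ok" case above
				}
				// a 500 response includes the per-check body seen in the log
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForAPIServer("https://192.168.50.22:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}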
	I0127 13:35:27.098938  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:27.098944  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:27.100651  429070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:35:27.101855  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:35:27.116286  429070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
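	The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. A sketch of a bridge CNI config of the same general shape, written the way the step above does (every value below is an illustrative assumption, not minikube's actual template):

	package main

	import "os"

	// an illustrative bridge + host-local conflist; subnet and plugin
	// options here are assumptions, not minikube's generated values.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// equivalent of the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
			panic(err)
		}
	}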
	I0127 13:35:27.139348  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.158680  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.158717  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.158730  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.158741  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.158748  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.158757  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:35:27.158766  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.158776  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.158785  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:35:27.158819  429070 system_pods.go:74] duration metric: took 19.446392ms to wait for pod list to return data ...
	I0127 13:35:27.158832  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.168338  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.168376  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.168392  429070 node_conditions.go:105] duration metric: took 9.550643ms to run NodePressure ...
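	The two checks above (system_pods.go and node_conditions.go) amount to listing kube-system pods and reading node capacity from the API. A hedged client-go sketch of the same two reads, assuming the kubeconfig path that minikube updates later in this log:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// kubeconfig path taken from the "Updating kubeconfig" line below; adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20317-361578/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// "waiting for kube-system pods to appear"
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))

		// "verifying NodePressure condition" reads node capacity from status
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), storage.String())
		}
	}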
	I0127 13:35:27.168416  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:27.459759  429070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:35:27.473184  429070 ops.go:34] apiserver oom_adj: -16
	I0127 13:35:27.473212  429070 kubeadm.go:597] duration metric: took 7.128244476s to restartPrimaryControlPlane
	I0127 13:35:27.473226  429070 kubeadm.go:394] duration metric: took 7.18920723s to StartCluster
	I0127 13:35:27.473251  429070 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.473341  429070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:27.475111  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.475373  429070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:35:27.475451  429070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:35:27.475562  429070 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-639843"
	I0127 13:35:27.475584  429070 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-639843"
	W0127 13:35:27.475598  429070 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:35:27.475598  429070 addons.go:69] Setting dashboard=true in profile "newest-cni-639843"
	I0127 13:35:27.475600  429070 addons.go:69] Setting metrics-server=true in profile "newest-cni-639843"
	I0127 13:35:27.475621  429070 addons.go:238] Setting addon dashboard=true in "newest-cni-639843"
	I0127 13:35:27.475629  429070 addons.go:238] Setting addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:27.475639  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	W0127 13:35:27.475643  429070 addons.go:247] addon metrics-server should already be in state true
	I0127 13:35:27.475676  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.475582  429070 addons.go:69] Setting default-storageclass=true in profile "newest-cni-639843"
	I0127 13:35:27.475611  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:27.475708  429070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-639843"
	W0127 13:35:27.475630  429070 addons.go:247] addon dashboard should already be in state true
	I0127 13:35:27.475812  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.476070  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476077  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476115  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476134  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476159  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476168  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476195  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476204  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.477011  429070 out.go:177] * Verifying Kubernetes components...
	I0127 13:35:27.478509  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:27.493703  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0127 13:35:27.493801  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I0127 13:35:27.493955  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0127 13:35:27.494221  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494259  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494795  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494819  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.494840  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494932  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494956  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495188  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495296  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495464  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.495481  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495764  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495798  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495812  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495819  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495871  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.496119  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0127 13:35:27.496433  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.496529  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.496572  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.496893  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.496916  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.497264  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.497502  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.502029  429070 addons.go:238] Setting addon default-storageclass=true in "newest-cni-639843"
	W0127 13:35:27.502051  429070 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:35:27.502080  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.502830  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.502873  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.512816  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I0127 13:35:27.513096  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0127 13:35:27.513275  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I0127 13:35:27.535151  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535226  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535266  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535748  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535766  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.535769  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535791  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.536087  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536347  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536392  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.536559  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.537321  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.537343  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.537676  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.537946  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.538406  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539127  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539700  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.540468  429070 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:35:27.540479  429070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:35:27.541259  429070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:35:27.542133  429070 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.542154  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:35:27.542174  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.542782  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:35:27.542801  429070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:35:27.542820  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.543610  429070 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:35:27.544743  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:35:27.544762  429070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:35:27.544780  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.545935  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546330  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.546364  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546495  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.546708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.546872  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.547017  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.547822  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548084  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548244  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548291  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548448  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548585  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548619  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548786  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548800  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.548938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548980  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.549036  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.549180  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.554799  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0127 13:35:27.555253  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.555780  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.555800  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.556187  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.556616  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.556646  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.574277  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0127 13:35:27.574815  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.575396  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.575420  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.575741  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.575966  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.577346  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.577556  429070 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.577574  429070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:35:27.577594  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.580061  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580408  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.580432  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580659  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.580836  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.580987  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.581148  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.713210  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:27.737971  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:27.738049  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:27.755609  429070 api_server.go:72] duration metric: took 280.198045ms to wait for apiserver process to appear ...
	I0127 13:35:27.755639  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:27.755660  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.765216  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.767614  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.767639  429070 api_server.go:131] duration metric: took 11.991322ms to wait for apiserver health ...
	I0127 13:35:27.767650  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.781696  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.781778  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.781799  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.781815  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.781827  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.781836  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running
	I0127 13:35:27.781862  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.781874  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.781884  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running
	I0127 13:35:27.781895  429070 system_pods.go:74] duration metric: took 14.236485ms to wait for pod list to return data ...
	I0127 13:35:27.781908  429070 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:27.787854  429070 default_sa.go:45] found service account: "default"
	I0127 13:35:27.787884  429070 default_sa.go:55] duration metric: took 5.965578ms for default service account to be created ...
	I0127 13:35:27.787899  429070 kubeadm.go:582] duration metric: took 312.493014ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:35:27.787924  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.793927  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.793949  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.793961  429070 node_conditions.go:105] duration metric: took 6.028431ms to run NodePressure ...
	I0127 13:35:27.793975  429070 start.go:241] waiting for startup goroutines ...
	I0127 13:35:27.806081  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.851437  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.912936  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:35:27.912967  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:35:27.941546  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:35:27.941579  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:35:28.017628  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:35:28.017663  429070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:35:28.027973  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:35:28.028016  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:35:28.097111  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.097146  429070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:35:28.148404  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:35:28.148439  429070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:35:28.272234  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.273446  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:35:28.273473  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:35:28.324863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:35:28.324897  429070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:35:28.400474  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:35:28.400504  429070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:35:28.460550  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:35:28.460583  429070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:35:28.508999  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:35:28.509031  429070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:35:28.555538  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555570  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.555889  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.555906  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.555915  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555923  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.556151  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.556180  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.556196  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.564252  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.564277  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.564553  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.564574  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.564893  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.605863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:28.605896  429070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:35:28.650259  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:29.517093  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.66560932s)
	I0127 13:35:29.517160  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517173  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517607  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:29.517645  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517655  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.517664  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517672  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517974  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517996  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741184  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46890411s)
	I0127 13:35:29.741241  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741252  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.741558  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.741576  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741586  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741609  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.742656  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.742680  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.742692  429070 addons.go:479] Verifying addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:29.742659  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:30.069134  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.418812542s)
	I0127 13:35:30.069214  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069233  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069539  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069559  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.069568  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069575  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069840  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069856  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.071209  429070 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-639843 addons enable metrics-server
	
	I0127 13:35:30.072569  429070 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 13:35:30.073970  429070 addons.go:514] duration metric: took 2.598533083s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 13:35:30.074007  429070 start.go:246] waiting for cluster config update ...
	I0127 13:35:30.074019  429070 start.go:255] writing updated cluster config ...
	I0127 13:35:30.074258  429070 ssh_runner.go:195] Run: rm -f paused
	I0127 13:35:30.125745  429070 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:35:30.127324  429070 out.go:177] * Done! kubectl is now configured to use "newest-cni-639843" cluster and "default" namespace by default
	I0127 13:35:41.313958  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:35:41.315406  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:41.315596  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:46.316260  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:46.316520  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:56.316974  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:56.317208  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:16.318338  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:16.318524  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320677  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:56.320945  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320963  427154 kubeadm.go:310] 
	I0127 13:36:56.321020  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:36:56.321085  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:36:56.321099  427154 kubeadm.go:310] 
	I0127 13:36:56.321165  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:36:56.321228  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:36:56.321357  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:36:56.321378  427154 kubeadm.go:310] 
	I0127 13:36:56.321499  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:36:56.321545  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:36:56.321574  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:36:56.321580  427154 kubeadm.go:310] 
	I0127 13:36:56.321720  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:36:56.321827  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:36:56.321840  427154 kubeadm.go:310] 
	I0127 13:36:56.321935  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:36:56.322018  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:36:56.322099  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:36:56.322162  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:36:56.322169  427154 kubeadm.go:310] 
	I0127 13:36:56.323303  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:36:56.323399  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:36:56.323478  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 13:36:56.323617  427154 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 13:36:56.323664  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:36:56.804696  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:36:56.819996  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:36:56.830103  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:36:56.830120  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:36:56.830161  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:36:56.839297  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:36:56.839351  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:36:56.848603  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:36:56.857433  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:36:56.857500  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:36:56.867735  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.876669  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:36:56.876721  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.885857  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:36:56.894734  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:36:56.894788  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:36:56.904112  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:36:56.975515  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:36:56.975724  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:36:57.110596  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:36:57.110748  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:36:57.110890  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:36:57.287182  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:36:57.289124  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:36:57.289247  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:36:57.289310  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:36:57.289405  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:36:57.289504  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:36:57.289595  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:36:57.289665  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:36:57.289780  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:36:57.290345  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:36:57.291337  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:36:57.292274  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:36:57.292554  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:36:57.292622  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:36:57.586245  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:36:57.746278  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:36:57.846816  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:36:57.985775  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:36:58.007369  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:36:58.008417  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:36:58.008485  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:36:58.134182  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:36:58.136066  427154 out.go:235]   - Booting up control plane ...
	I0127 13:36:58.136194  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:36:58.148785  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:36:58.148921  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:36:58.149274  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:36:58.153395  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:37:38.155987  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:37:38.156613  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:38.156831  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:43.157356  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:43.157567  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:53.158341  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:53.158675  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:13.158624  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:13.158876  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157583  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:53.157824  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157839  427154 kubeadm.go:310] 
	I0127 13:38:53.157896  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:38:53.157954  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:38:53.157966  427154 kubeadm.go:310] 
	I0127 13:38:53.158014  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:38:53.158064  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:38:53.158222  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:38:53.158234  427154 kubeadm.go:310] 
	I0127 13:38:53.158404  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:38:53.158453  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:38:53.158483  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:38:53.158491  427154 kubeadm.go:310] 
	I0127 13:38:53.158624  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:38:53.158726  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:38:53.158741  427154 kubeadm.go:310] 
	I0127 13:38:53.158894  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:38:53.159040  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:38:53.159165  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:38:53.159264  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:38:53.159275  427154 kubeadm.go:310] 
	I0127 13:38:53.159902  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:38:53.160042  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:38:53.160128  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:38:53.160213  427154 kubeadm.go:394] duration metric: took 8m2.798471593s to StartCluster
	I0127 13:38:53.160286  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:38:53.160377  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:38:53.205471  427154 cri.go:89] found id: ""
	I0127 13:38:53.205496  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.205504  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:38:53.205510  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:38:53.205577  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:38:53.240500  427154 cri.go:89] found id: ""
	I0127 13:38:53.240532  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.240543  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:38:53.240564  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:38:53.240625  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:38:53.282232  427154 cri.go:89] found id: ""
	I0127 13:38:53.282267  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.282279  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:38:53.282287  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:38:53.282354  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:38:53.315589  427154 cri.go:89] found id: ""
	I0127 13:38:53.315643  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.315659  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:38:53.315666  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:38:53.315735  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:38:53.349806  427154 cri.go:89] found id: ""
	I0127 13:38:53.349836  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.349844  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:38:53.349850  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:38:53.349906  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:38:53.382052  427154 cri.go:89] found id: ""
	I0127 13:38:53.382084  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.382095  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:38:53.382103  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:38:53.382176  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:38:53.416057  427154 cri.go:89] found id: ""
	I0127 13:38:53.416091  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.416103  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:38:53.416120  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:38:53.416185  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:38:53.449983  427154 cri.go:89] found id: ""
	I0127 13:38:53.450017  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.450029  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:38:53.450046  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:38:53.450064  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:38:53.498208  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:38:53.498242  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:38:53.552441  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:38:53.552472  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:38:53.567811  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:38:53.567841  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:38:53.646625  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:38:53.646651  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:38:53.646667  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 13:38:53.748675  427154 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 13:38:53.748747  427154 out.go:270] * 
	W0127 13:38:53.748849  427154 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.748865  427154 out.go:270] * 
	W0127 13:38:53.749670  427154 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:38:53.753264  427154 out.go:201] 
	W0127 13:38:53.754315  427154 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.754372  427154 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 13:38:53.754397  427154 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 13:38:53.755624  427154 out.go:201] 
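	Editor's note: the K8S_KUBELET_NOT_RUNNING suggestion above points at the kubelet cgroup driver. A minimal sketch of retrying the failed start with that workaround is shown below; `<profile>` is a placeholder for the affected minikube profile, and the driver/runtime flags mirror this KVM/cri-o job for illustration only — the only flag taken from the logged suggestion is --extra-config.
	
	    # Retry the start with the kubelet cgroup driver pinned to systemd,
	    # per the suggestion in the error output above. <profile> stands for
	    # the affected profile; --driver/--container-runtime are illustrative
	    # assumptions matching this KVM_Linux_crio job.
	    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	      --extra-config=kubelet.cgroup-driver=systemd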
	
	
	==> CRI-O <==
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.676796958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986046676777367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39cbb00d-7260-4303-b214-3639682ac1bd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.677425982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13470169-42ae-4e07-b6cc-e69513e2827e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.677470633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13470169-42ae-4e07-b6cc-e69513e2827e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.677709414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72ac350ddbd9cde1071af06f387e4750f7d91c3f65d5032db2be984cb322e3e9,PodSandboxId:47f1944af7fe11c656129dcebca6b003f06d46139f12600b33f47f9e0082bf31,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737986041510800396,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-6g7f2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4c74dacd-68f3-4f56-8135-2b6bf32f7d04,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eebd2c6cf2f5fd0acc9bb2a5752b3ee5b41876ef16a5e6508eba1a510a5f5c35,PodSandboxId:80c631290a5ad60883d010b23acfa4ea65a83c46bfde22a92d20b03c38e02d9b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737984798379085180,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-22f25,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: c54ef4ac-55b9-40c1-8f60-b2395d8c7cf0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcece71d1d7af5effa3c2aa7c9942f7a3ded17e169c9631a66c02b72b7aea899,PodSandboxId:816d021a4424370af0d5020c6091a0d787fe248b21ca7d857c8f426317d87512,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737984784139075496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8a0a0c-0e17-4d75-92db-82da926e7c42,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2895a902fb9eae4d56e1e7a6d11d8e522edbfa2e15824708a19566a0c3e0b454,PodSandboxId:1b35dbc8db050001965789c867b044a4ac28aefa8ff77d5c4e31fedbfed02a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984783432963220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lw97c,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 23d851ec-de2d-4c28-8fe6-a20675a26a07,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a9af0b467bde2675325e2c3f927615da43a49de4fc1b918194a0166378554,PodSandboxId:395d522667f2a1f55fb22cf7b2de52043d37dc69360881093edb3b63031a64f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984783491397051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xlk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10841d1b-6c81-478f-81c9-e75635698866,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ad3d774269899a5913cf0fcb0f865b05e6b041698b051a5eddf0d40901aa43,PodSandboxId:b63ec4b0122de320dce8329216dfcbf584aec46ed1e5ba395400b706c7fdf2e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737984781808728579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c43bf4-757d-44d6-9aa2-387a4ee10693,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dd15bfefd069c8ce29c3d7b9adb843d7c276c527a565a8d77dfbcfeef80b14,PodSandboxId:fdcd723b6870ad7c57d9048cc2e3de09916606ea726a05e0f0cd42bb930df230,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad85
10e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737984770920709422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1ae77ee76889ddb43c6b5720bb38fb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00aa829257f372862be501a98431379a910e013850da60cefba9aeb3b0e9eced,PodSandboxId:45e6e35c917e4fa31b3529f3b6ffcd08851047bc601a005e0ef904542636cfaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737984770935917903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c26df59e25a64c27ade43ed9f2c9527,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0fd5ab20ef5e15887455321d8dbaaacdacdac6683ebebe046b40d40e3204a3,PodSandboxId:46d8dae93d1fd1f9b1f3cb28b17a6678c696d75e715b250cb7116453031b1bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737984770914068473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe97b26a2b9f6d4f1a8beab8b5fedc12,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe9e15a2c160b330491c7d13d06044c8b93308048282ce90a8c2250d311e62,PodSandboxId:1b3b2aa98e4d850551355661db0280710f07eb52eb9614934ca2bb2333133fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737984770870994813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f392e32b6f443eebc2b2c817dd25832,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17edd14f1c4e8304b5ad9d12c1336f297cc7a53e584e0c4096f641a64ce83c3,PodSandboxId:d4b11bd5e5a14c517b025ae18bc225ed0b7eaa57f2e71d5462de519e38fa4be8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737984486848445355,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe97b26a2b9f6d4f1a8beab8b5fedc12,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13470169-42ae-4e07-b6cc-e69513e2827e name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.716465162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=875fea3f-4e34-4a80-8e92-c9454e4488ce name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.716526962Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=875fea3f-4e34-4a80-8e92-c9454e4488ce name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.717941162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19030152-d74c-4d50-97d4-e73efdce83e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.718487408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986046718464470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19030152-d74c-4d50-97d4-e73efdce83e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.718927493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e60b181f-49ae-4ca7-a153-071926387a4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.719003140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e60b181f-49ae-4ca7-a153-071926387a4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.719385723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72ac350ddbd9cde1071af06f387e4750f7d91c3f65d5032db2be984cb322e3e9,PodSandboxId:47f1944af7fe11c656129dcebca6b003f06d46139f12600b33f47f9e0082bf31,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737986041510800396,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-6g7f2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4c74dacd-68f3-4f56-8135-2b6bf32f7d04,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eebd2c6cf2f5fd0acc9bb2a5752b3ee5b41876ef16a5e6508eba1a510a5f5c35,PodSandboxId:80c631290a5ad60883d010b23acfa4ea65a83c46bfde22a92d20b03c38e02d9b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737984798379085180,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-22f25,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: c54ef4ac-55b9-40c1-8f60-b2395d8c7cf0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcece71d1d7af5effa3c2aa7c9942f7a3ded17e169c9631a66c02b72b7aea899,PodSandboxId:816d021a4424370af0d5020c6091a0d787fe248b21ca7d857c8f426317d87512,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737984784139075496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8a0a0c-0e17-4d75-92db-82da926e7c42,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2895a902fb9eae4d56e1e7a6d11d8e522edbfa2e15824708a19566a0c3e0b454,PodSandboxId:1b35dbc8db050001965789c867b044a4ac28aefa8ff77d5c4e31fedbfed02a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984783432963220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lw97c,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 23d851ec-de2d-4c28-8fe6-a20675a26a07,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a9af0b467bde2675325e2c3f927615da43a49de4fc1b918194a0166378554,PodSandboxId:395d522667f2a1f55fb22cf7b2de52043d37dc69360881093edb3b63031a64f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984783491397051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xlk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10841d1b-6c81-478f-81c9-e75635698866,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ad3d774269899a5913cf0fcb0f865b05e6b041698b051a5eddf0d40901aa43,PodSandboxId:b63ec4b0122de320dce8329216dfcbf584aec46ed1e5ba395400b706c7fdf2e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737984781808728579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c43bf4-757d-44d6-9aa2-387a4ee10693,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dd15bfefd069c8ce29c3d7b9adb843d7c276c527a565a8d77dfbcfeef80b14,PodSandboxId:fdcd723b6870ad7c57d9048cc2e3de09916606ea726a05e0f0cd42bb930df230,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad85
10e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737984770920709422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1ae77ee76889ddb43c6b5720bb38fb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00aa829257f372862be501a98431379a910e013850da60cefba9aeb3b0e9eced,PodSandboxId:45e6e35c917e4fa31b3529f3b6ffcd08851047bc601a005e0ef904542636cfaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737984770935917903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c26df59e25a64c27ade43ed9f2c9527,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0fd5ab20ef5e15887455321d8dbaaacdacdac6683ebebe046b40d40e3204a3,PodSandboxId:46d8dae93d1fd1f9b1f3cb28b17a6678c696d75e715b250cb7116453031b1bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737984770914068473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe97b26a2b9f6d4f1a8beab8b5fedc12,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe9e15a2c160b330491c7d13d06044c8b93308048282ce90a8c2250d311e62,PodSandboxId:1b3b2aa98e4d850551355661db0280710f07eb52eb9614934ca2bb2333133fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737984770870994813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f392e32b6f443eebc2b2c817dd25832,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17edd14f1c4e8304b5ad9d12c1336f297cc7a53e584e0c4096f641a64ce83c3,PodSandboxId:d4b11bd5e5a14c517b025ae18bc225ed0b7eaa57f2e71d5462de519e38fa4be8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737984486848445355,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe97b26a2b9f6d4f1a8beab8b5fedc12,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e60b181f-49ae-4ca7-a153-071926387a4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.751093656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea58d81e-d1ff-49c7-b7e1-2efccf9194e8 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.751210846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea58d81e-d1ff-49c7-b7e1-2efccf9194e8 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.752834999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7a227bd-9498-45a1-a2a0-1f9f9edfb39b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.753304835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986046753285997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7a227bd-9498-45a1-a2a0-1f9f9edfb39b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.753754263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcce1e2e-ca12-4e6a-9acc-e85fe13801fb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.753819893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcce1e2e-ca12-4e6a-9acc-e85fe13801fb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.754244679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72ac350ddbd9cde1071af06f387e4750f7d91c3f65d5032db2be984cb322e3e9,PodSandboxId:47f1944af7fe11c656129dcebca6b003f06d46139f12600b33f47f9e0082bf31,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737986041510800396,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-6g7f2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4c74dacd-68f3-4f56-8135-2b6bf32f7d04,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eebd2c6cf2f5fd0acc9bb2a5752b3ee5b41876ef16a5e6508eba1a510a5f5c35,PodSandboxId:80c631290a5ad60883d010b23acfa4ea65a83c46bfde22a92d20b03c38e02d9b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737984798379085180,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-22f25,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: c54ef4ac-55b9-40c1-8f60-b2395d8c7cf0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcece71d1d7af5effa3c2aa7c9942f7a3ded17e169c9631a66c02b72b7aea899,PodSandboxId:816d021a4424370af0d5020c6091a0d787fe248b21ca7d857c8f426317d87512,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737984784139075496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8a0a0c-0e17-4d75-92db-82da926e7c42,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2895a902fb9eae4d56e1e7a6d11d8e522edbfa2e15824708a19566a0c3e0b454,PodSandboxId:1b35dbc8db050001965789c867b044a4ac28aefa8ff77d5c4e31fedbfed02a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984783432963220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lw97c,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 23d851ec-de2d-4c28-8fe6-a20675a26a07,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a9af0b467bde2675325e2c3f927615da43a49de4fc1b918194a0166378554,PodSandboxId:395d522667f2a1f55fb22cf7b2de52043d37dc69360881093edb3b63031a64f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984783491397051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xlk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10841d1b-6c81-478f-81c9-e75635698866,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ad3d774269899a5913cf0fcb0f865b05e6b041698b051a5eddf0d40901aa43,PodSandboxId:b63ec4b0122de320dce8329216dfcbf584aec46ed1e5ba395400b706c7fdf2e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737984781808728579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c43bf4-757d-44d6-9aa2-387a4ee10693,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dd15bfefd069c8ce29c3d7b9adb843d7c276c527a565a8d77dfbcfeef80b14,PodSandboxId:fdcd723b6870ad7c57d9048cc2e3de09916606ea726a05e0f0cd42bb930df230,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad85
10e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737984770920709422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1ae77ee76889ddb43c6b5720bb38fb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00aa829257f372862be501a98431379a910e013850da60cefba9aeb3b0e9eced,PodSandboxId:45e6e35c917e4fa31b3529f3b6ffcd08851047bc601a005e0ef904542636cfaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737984770935917903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c26df59e25a64c27ade43ed9f2c9527,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0fd5ab20ef5e15887455321d8dbaaacdacdac6683ebebe046b40d40e3204a3,PodSandboxId:46d8dae93d1fd1f9b1f3cb28b17a6678c696d75e715b250cb7116453031b1bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737984770914068473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe97b26a2b9f6d4f1a8beab8b5fedc12,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe9e15a2c160b330491c7d13d06044c8b93308048282ce90a8c2250d311e62,PodSandboxId:1b3b2aa98e4d850551355661db0280710f07eb52eb9614934ca2bb2333133fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737984770870994813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f392e32b6f443eebc2b2c817dd25832,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17edd14f1c4e8304b5ad9d12c1336f297cc7a53e584e0c4096f641a64ce83c3,PodSandboxId:d4b11bd5e5a14c517b025ae18bc225ed0b7eaa57f2e71d5462de519e38fa4be8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737984486848445355,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe97b26a2b9f6d4f1a8beab8b5fedc12,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcce1e2e-ca12-4e6a-9acc-e85fe13801fb name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.791447334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97b3999a-0e12-4139-b5e8-5986207bd228 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.791525258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97b3999a-0e12-4139-b5e8-5986207bd228 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.792700031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51bde876-cdee-493b-9647-94a0bbf69691 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.793131174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986046793076431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51bde876-cdee-493b-9647-94a0bbf69691 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.793599610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a1edaca-4c45-46a5-adf6-521ee66c74cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.793669516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a1edaca-4c45-46a5-adf6-521ee66c74cd name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:06 no-preload-563155 crio[720]: time="2025-01-27 13:54:06.794297459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:72ac350ddbd9cde1071af06f387e4750f7d91c3f65d5032db2be984cb322e3e9,PodSandboxId:47f1944af7fe11c656129dcebca6b003f06d46139f12600b33f47f9e0082bf31,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737986041510800396,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-6g7f2,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4c74dacd-68f3-4f56-8135-2b6bf32f7d04,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eebd2c6cf2f5fd0acc9bb2a5752b3ee5b41876ef16a5e6508eba1a510a5f5c35,PodSandboxId:80c631290a5ad60883d010b23acfa4ea65a83c46bfde22a92d20b03c38e02d9b,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737984798379085180,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-22f25,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: c54ef4ac-55b9-40c1-8f60-b2395d8c7cf0,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcece71d1d7af5effa3c2aa7c9942f7a3ded17e169c9631a66c02b72b7aea899,PodSandboxId:816d021a4424370af0d5020c6091a0d787fe248b21ca7d857c8f426317d87512,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737984784139075496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e8a0a0c-0e17-4d75-92db-82da926e7c42,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2895a902fb9eae4d56e1e7a6d11d8e522edbfa2e15824708a19566a0c3e0b454,PodSandboxId:1b35dbc8db050001965789c867b044a4ac28aefa8ff77d5c4e31fedbfed02a23,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984783432963220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lw97c,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 23d851ec-de2d-4c28-8fe6-a20675a26a07,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2a9af0b467bde2675325e2c3f927615da43a49de4fc1b918194a0166378554,PodSandboxId:395d522667f2a1f55fb22cf7b2de52043d37dc69360881093edb3b63031a64f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984783491397051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xlk4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10841d1b-6c81-478f-81c9-e75635698866,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ad3d774269899a5913cf0fcb0f865b05e6b041698b051a5eddf0d40901aa43,PodSandboxId:b63ec4b0122de320dce8329216dfcbf584aec46ed1e5ba395400b706c7fdf2e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737984781808728579,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pn8rl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56c43bf4-757d-44d6-9aa2-387a4ee10693,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17dd15bfefd069c8ce29c3d7b9adb843d7c276c527a565a8d77dfbcfeef80b14,PodSandboxId:fdcd723b6870ad7c57d9048cc2e3de09916606ea726a05e0f0cd42bb930df230,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad85
10e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737984770920709422,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e1ae77ee76889ddb43c6b5720bb38fb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00aa829257f372862be501a98431379a910e013850da60cefba9aeb3b0e9eced,PodSandboxId:45e6e35c917e4fa31b3529f3b6ffcd08851047bc601a005e0ef904542636cfaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737984770935917903,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c26df59e25a64c27ade43ed9f2c9527,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae0fd5ab20ef5e15887455321d8dbaaacdacdac6683ebebe046b40d40e3204a3,PodSandboxId:46d8dae93d1fd1f9b1f3cb28b17a6678c696d75e715b250cb7116453031b1bc8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737984770914068473,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe97b26a2b9f6d4f1a8beab8b5fedc12,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27fe9e15a2c160b330491c7d13d06044c8b93308048282ce90a8c2250d311e62,PodSandboxId:1b3b2aa98e4d850551355661db0280710f07eb52eb9614934ca2bb2333133fb6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737984770870994813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f392e32b6f443eebc2b2c817dd25832,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e17edd14f1c4e8304b5ad9d12c1336f297cc7a53e584e0c4096f641a64ce83c3,PodSandboxId:d4b11bd5e5a14c517b025ae18bc225ed0b7eaa57f2e71d5462de519e38fa4be8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737984486848445355,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-563155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe97b26a2b9f6d4f1a8beab8b5fedc12,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a1edaca-4c45-46a5-adf6-521ee66c74cd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	72ac350ddbd9c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   9                   47f1944af7fe1       dashboard-metrics-scraper-86c6bf9756-6g7f2
	eebd2c6cf2f5f       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   80c631290a5ad       kubernetes-dashboard-7779f9b69b-22f25
	dcece71d1d7af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   816d021a44243       storage-provisioner
	ea2a9af0b467b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   395d522667f2a       coredns-668d6bf9bc-xlk4d
	2895a902fb9ea       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   1b35dbc8db050       coredns-668d6bf9bc-lw97c
	34ad3d7742698       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           21 minutes ago      Running             kube-proxy                  0                   b63ec4b0122de       kube-proxy-pn8rl
	00aa829257f37       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           21 minutes ago      Running             kube-scheduler              2                   45e6e35c917e4       kube-scheduler-no-preload-563155
	17dd15bfefd06       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   fdcd723b6870a       etcd-no-preload-563155
	ae0fd5ab20ef5       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           21 minutes ago      Running             kube-apiserver              2                   46d8dae93d1fd       kube-apiserver-no-preload-563155
	27fe9e15a2c16       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           21 minutes ago      Running             kube-controller-manager     2                   1b3b2aa98e4d8       kube-controller-manager-no-preload-563155
	e17edd14f1c4e       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           26 minutes ago      Exited              kube-apiserver              1                   d4b11bd5e5a14       kube-apiserver-no-preload-563155
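
The cri-o debug entries and the table above are two views of the same data: the `==> container status <==` table is built from the RuntimeService/ListContainers responses that cri-o keeps logging at 13:54:06. As a minimal sketch (not part of the captured logs), the snippet below issues the same ListContainers call against the CRI socket advertised in the node annotations (`unix:///var/run/crio/crio.sock`); it assumes the `k8s.io/cri-api` and `google.golang.org/grpc` modules and local access to that socket.

```go
// Sketch, not part of the captured logs: issue the same
// RuntimeService/ListContainers call that cri-o logs above, over the
// CRI socket from the node annotations. Assumes k8s.io/cri-api and
// google.golang.org/grpc are available and the socket is reachable.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Plain gRPC over the local unix socket; no TLS is involved.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter is what produces "No filters were applied,
	// returning full container list" in the debug log above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-27s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}
```

Running `crictl ps -a` on the node prints the same listing without any code.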
	
	
	==> coredns [2895a902fb9eae4d56e1e7a6d11d8e522edbfa2e15824708a19566a0c3e0b454] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ea2a9af0b467bde2675325e2c3f927615da43a49de4fc1b918194a0166378554] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-563155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-563155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=no-preload-563155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_32_57_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:32:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-563155
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:53:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:51:51 +0000   Mon, 27 Jan 2025 13:32:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:51:51 +0000   Mon, 27 Jan 2025 13:32:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:51:51 +0000   Mon, 27 Jan 2025 13:32:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:51:51 +0000   Mon, 27 Jan 2025 13:32:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.130
	  Hostname:    no-preload-563155
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 240d51ca09be413f9ceb546510b728b9
	  System UUID:                240d51ca-09be-413f-9ceb-546510b728b9
	  Boot ID:                    e839a9bb-4435-4248-a337-c9546c37a840
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-lw97c                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-xlk4d                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-no-preload-563155                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-563155              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-563155     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-pn8rl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-563155              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-bdcxg                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-6g7f2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-22f25         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node no-preload-563155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node no-preload-563155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node no-preload-563155 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node no-preload-563155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node no-preload-563155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node no-preload-563155 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node no-preload-563155 event: Registered Node no-preload-563155 in Controller
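
The `describe nodes` block above is the usual `kubectl describe node` view: all conditions healthy (Ready since 13:32:54), every control-plane pod ~21 minutes old, and 950m CPU / 440Mi memory requested out of 2 CPUs and ~2.1Gi allocatable. Below is a hedged client-go sketch that fetches the same Node object and prints its conditions and allocatable resources; the kubeconfig path is an assumption, not something taken from this report.

```go
// Sketch only: fetch the same Node object that `describe nodes`
// summarizes above and print its conditions and allocatable resources.
// The kubeconfig path is an assumption, not taken from this report.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"no-preload-563155", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:   ", node.Status.Allocatable.Cpu())
	fmt.Println("allocatable memory:", node.Status.Allocatable.Memory())
}
```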
	
	
	==> dmesg <==
	[  +4.923981] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.729444] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597480] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.502891] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.055206] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066262] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.183716] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.111191] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.251709] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[Jan27 13:28] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.058417] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.723884] systemd-fstab-generator[1448]: Ignoring "noauto" option for root device
	[  +3.966562] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.431969] kauditd_printk_skb: 86 callbacks suppressed
	[Jan27 13:32] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.265486] systemd-fstab-generator[3217]: Ignoring "noauto" option for root device
	[  +4.654135] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.907858] systemd-fstab-generator[3562]: Ignoring "noauto" option for root device
	[Jan27 13:33] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.088265] systemd-fstab-generator[3704]: Ignoring "noauto" option for root device
	[  +8.993857] kauditd_printk_skb: 110 callbacks suppressed
	[  +7.795272] kauditd_printk_skb: 7 callbacks suppressed
	[ +27.110475] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [17dd15bfefd069c8ce29c3d7b9adb843d7c276c527a565a8d77dfbcfeef80b14] <==
	{"level":"warn","ts":"2025-01-27T13:35:21.473593Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"398.177804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:35:21.473647Z","caller":"traceutil/trace.go:171","msg":"trace[433971052] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:720; }","duration":"398.255804ms","start":"2025-01-27T13:35:21.075381Z","end":"2025-01-27T13:35:21.473637Z","steps":["trace[433971052] 'agreement among raft nodes before linearized reading'  (duration: 398.17102ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:35:21.473676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:35:21.075367Z","time spent":"398.297848ms","remote":"127.0.0.1:38626","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T13:35:21.473732Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"370.248076ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:35:21.473846Z","caller":"traceutil/trace.go:171","msg":"trace[798405007] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:720; }","duration":"370.387337ms","start":"2025-01-27T13:35:21.103447Z","end":"2025-01-27T13:35:21.473834Z","steps":["trace[798405007] 'agreement among raft nodes before linearized reading'  (duration: 370.177786ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:35:21.982148Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":14102059782328618589,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-01-27T13:35:22.028448Z","caller":"traceutil/trace.go:171","msg":"trace[129463285] linearizableReadLoop","detail":"{readStateIndex:763; appliedIndex:762; }","duration":"546.945302ms","start":"2025-01-27T13:35:21.481482Z","end":"2025-01-27T13:35:22.028428Z","steps":["trace[129463285] 'read index received'  (duration: 546.606072ms)","trace[129463285] 'applied index is now lower than readState.Index'  (duration: 338.476µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:35:22.028647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"547.136026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:35:22.028713Z","caller":"traceutil/trace.go:171","msg":"trace[1442452718] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:721; }","duration":"547.234884ms","start":"2025-01-27T13:35:21.481465Z","end":"2025-01-27T13:35:22.028700Z","steps":["trace[1442452718] 'agreement among raft nodes before linearized reading'  (duration: 547.12569ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:35:22.028768Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:35:21.481456Z","time spent":"547.302066ms","remote":"127.0.0.1:38626","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-01-27T13:35:22.028753Z","caller":"traceutil/trace.go:171","msg":"trace[922615168] transaction","detail":"{read_only:false; response_revision:721; number_of_response:1; }","duration":"549.013326ms","start":"2025-01-27T13:35:21.479722Z","end":"2025-01-27T13:35:22.028735Z","steps":["trace[922615168] 'process raft request'  (duration: 548.381303ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:35:22.029058Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:35:21.479707Z","time spent":"549.287996ms","remote":"127.0.0.1:38596","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:719 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-27T13:35:22.531848Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"429.386462ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:35:22.531938Z","caller":"traceutil/trace.go:171","msg":"trace[1762610538] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:721; }","duration":"429.499119ms","start":"2025-01-27T13:35:22.102424Z","end":"2025-01-27T13:35:22.531923Z","steps":["trace[1762610538] 'range keys from in-memory index tree'  (duration: 429.371705ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:35:22.533041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.017002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:35:22.533147Z","caller":"traceutil/trace.go:171","msg":"trace[1713771783] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:721; }","duration":"258.150529ms","start":"2025-01-27T13:35:22.274983Z","end":"2025-01-27T13:35:22.533134Z","steps":["trace[1713771783] 'range keys from in-memory index tree'  (duration: 257.92852ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:42:52.111411Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":880}
	{"level":"info","ts":"2025-01-27T13:42:52.143358Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":880,"took":"31.061005ms","hash":2007357343,"current-db-size-bytes":2981888,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2981888,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-27T13:42:52.143649Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2007357343,"revision":880,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T13:47:52.118647Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1132}
	{"level":"info","ts":"2025-01-27T13:47:52.123769Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1132,"took":"4.052385ms","hash":3841171224,"current-db-size-bytes":2981888,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1753088,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:47:52.123878Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3841171224,"revision":1132,"compact-revision":880}
	{"level":"info","ts":"2025-01-27T13:52:52.126006Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1383}
	{"level":"info","ts":"2025-01-27T13:52:52.130380Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1383,"took":"3.551332ms","hash":3350137007,"current-db-size-bytes":2981888,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1753088,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:52:52.130464Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3350137007,"revision":1383,"compact-revision":1132}
	
	
	==> kernel <==
	 13:54:07 up 26 min,  0 users,  load average: 0.42, 0.33, 0.27
	Linux no-preload-563155 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae0fd5ab20ef5e15887455321d8dbaaacdacdac6683ebebe046b40d40e3204a3] <==
	 > logger="UnhandledError"
	I0127 13:50:54.969635       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:52:53.967372       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:52:53.967590       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 13:52:54.969539       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:52:54.969617       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 13:52:54.969708       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:52:54.969899       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:52:54.970778       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:52:54.971917       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:53:54.971714       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:53:54.972045       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 13:53:54.972124       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:53:54.972243       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:53:54.973910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:53:54.973982       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
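
The repeated `v1beta1.metrics.k8s.io` errors above come from the aggregation layer: the apiserver cannot fetch the OpenAPI spec from metrics-server (HTTP 503), so it keeps requeuing that APIService; consistent with this, the metrics-server pod listed under Non-terminated Pods has no running container in the status table further up. A sketch of reading that APIService's conditions with the dynamic client follows; the kubeconfig path is assumed.

```go
// Sketch only: read the APIService the apiserver keeps requeuing above
// (v1beta1.metrics.k8s.io) and print its registered conditions.
// The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{
		Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices",
	}
	obj, err := dyn.Resource(gvr).Get(context.Background(),
		"v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	conds, found, err := unstructured.NestedSlice(obj.Object, "status", "conditions")
	if err != nil || !found {
		panic(fmt.Sprintf("no status.conditions: found=%v err=%v", found, err))
	}
	for _, c := range conds {
		if m, ok := c.(map[string]interface{}); ok {
			fmt.Printf("%v=%v (%v): %v\n", m["type"], m["status"], m["reason"], m["message"])
		}
	}
}
```

`kubectl get apiservice v1beta1.metrics.k8s.io -o yaml` shows the same status block without any code.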
	
	
	==> kube-apiserver [e17edd14f1c4e8304b5ad9d12c1336f297cc7a53e584e0c4096f641a64ce83c3] <==
	W0127 13:32:46.658933       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.663932       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.668363       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.670746       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.675135       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.723533       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.737387       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.754141       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.780567       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.863270       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:46.912516       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.037126       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.105219       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.115582       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.189258       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.230778       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.231402       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.238067       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.320036       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.339888       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.375658       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.559961       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.664428       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.672041       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:32:47.742519       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [27fe9e15a2c160b330491c7d13d06044c8b93308048282ce90a8c2250d311e62] <==
	I0127 13:49:07.507901       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="88.967µs"
	I0127 13:49:21.508827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="48.378µs"
	E0127 13:49:30.777062       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:49:30.867106       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:50:00.782960       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:00.874993       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:50:30.788863       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:30.882702       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:51:00.795042       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:51:00.890583       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:51:30.801671       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:51:30.900734       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:51:51.498978       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-563155"
	E0127 13:52:00.806977       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:52:00.907467       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:52:30.814768       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:52:30.915999       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:53:00.821855       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:53:00.923315       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:53:30.828688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:53:30.931443       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:54:00.835089       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:54:00.939542       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:54:01.672686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="124.673µs"
	I0127 13:54:02.676878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="74.014µs"
	
	
	==> kube-proxy [34ad3d774269899a5913cf0fcb0f865b05e6b041698b051a5eddf0d40901aa43] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:33:02.280324       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 13:33:02.313409       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.130"]
	E0127 13:33:02.313516       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:33:02.422738       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 13:33:02.422768       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 13:33:02.422790       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:33:02.430144       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:33:02.433977       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:33:02.433992       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:33:02.436069       1 config.go:199] "Starting service config controller"
	I0127 13:33:02.436102       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:33:02.436819       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:33:02.436827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:33:02.443531       1 config.go:329] "Starting node config controller"
	I0127 13:33:02.443570       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:33:02.536323       1 shared_informer.go:320] Caches are synced for service config
	I0127 13:33:02.537220       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 13:33:02.543806       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00aa829257f372862be501a98431379a910e013850da60cefba9aeb3b0e9eced] <==
	W0127 13:32:54.906665       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 13:32:54.906716       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:54.918375       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 13:32:54.918453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:54.927077       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 13:32:54.927298       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:54.937327       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 13:32:54.937385       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:54.993488       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 13:32:54.993616       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:55.020921       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 13:32:55.021005       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:55.044477       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 13:32:55.044556       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:55.102694       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 13:32:55.102783       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:55.126683       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0127 13:32:55.126767       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0127 13:32:55.143287       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 13:32:55.144376       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:55.156231       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 13:32:55.156278       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:32:55.261858       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 13:32:55.261909       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 13:32:57.016292       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:53:36 no-preload-563155 kubelet[3569]: E0127 13:53:36.861029    3569 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986016860003810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:53:46 no-preload-563155 kubelet[3569]: E0127 13:53:46.863509    3569 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986026862898084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:53:46 no-preload-563155 kubelet[3569]: E0127 13:53:46.863538    3569 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986026862898084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:53:47 no-preload-563155 kubelet[3569]: E0127 13:53:47.495266    3569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-bdcxg" podUID="c3d5e10f-b2dc-4890-96dc-c53d634f6823"
	Jan 27 13:53:49 no-preload-563155 kubelet[3569]: I0127 13:53:49.492681    3569 scope.go:117] "RemoveContainer" containerID="96623c69e186f4b8f77438de6b952c1c8b47b9d06f5eaaa1920f6b854720c90e"
	Jan 27 13:53:49 no-preload-563155 kubelet[3569]: E0127 13:53:49.492874    3569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-6g7f2_kubernetes-dashboard(4c74dacd-68f3-4f56-8135-2b6bf32f7d04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-6g7f2" podUID="4c74dacd-68f3-4f56-8135-2b6bf32f7d04"
	Jan 27 13:53:56 no-preload-563155 kubelet[3569]: E0127 13:53:56.550583    3569 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 13:53:56 no-preload-563155 kubelet[3569]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 13:53:56 no-preload-563155 kubelet[3569]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 13:53:56 no-preload-563155 kubelet[3569]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 13:53:56 no-preload-563155 kubelet[3569]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 13:53:56 no-preload-563155 kubelet[3569]: E0127 13:53:56.866713    3569 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986036864503976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:53:56 no-preload-563155 kubelet[3569]: E0127 13:53:56.866910    3569 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986036864503976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:01 no-preload-563155 kubelet[3569]: I0127 13:54:01.492963    3569 scope.go:117] "RemoveContainer" containerID="96623c69e186f4b8f77438de6b952c1c8b47b9d06f5eaaa1920f6b854720c90e"
	Jan 27 13:54:01 no-preload-563155 kubelet[3569]: E0127 13:54:01.524503    3569 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 13:54:01 no-preload-563155 kubelet[3569]: E0127 13:54:01.524608    3569 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 27 13:54:01 no-preload-563155 kubelet[3569]: E0127 13:54:01.524848    3569 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9jzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-bdcxg_kube-system(c3d5e10f-b2dc-4890-96dc-c53d634f6823): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 27 13:54:01 no-preload-563155 kubelet[3569]: E0127 13:54:01.526911    3569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-bdcxg" podUID="c3d5e10f-b2dc-4890-96dc-c53d634f6823"
	Jan 27 13:54:01 no-preload-563155 kubelet[3569]: I0127 13:54:01.648950    3569 scope.go:117] "RemoveContainer" containerID="96623c69e186f4b8f77438de6b952c1c8b47b9d06f5eaaa1920f6b854720c90e"
	Jan 27 13:54:01 no-preload-563155 kubelet[3569]: I0127 13:54:01.649361    3569 scope.go:117] "RemoveContainer" containerID="72ac350ddbd9cde1071af06f387e4750f7d91c3f65d5032db2be984cb322e3e9"
	Jan 27 13:54:01 no-preload-563155 kubelet[3569]: E0127 13:54:01.649646    3569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-6g7f2_kubernetes-dashboard(4c74dacd-68f3-4f56-8135-2b6bf32f7d04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-6g7f2" podUID="4c74dacd-68f3-4f56-8135-2b6bf32f7d04"
	Jan 27 13:54:02 no-preload-563155 kubelet[3569]: I0127 13:54:02.656856    3569 scope.go:117] "RemoveContainer" containerID="72ac350ddbd9cde1071af06f387e4750f7d91c3f65d5032db2be984cb322e3e9"
	Jan 27 13:54:02 no-preload-563155 kubelet[3569]: E0127 13:54:02.657137    3569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-6g7f2_kubernetes-dashboard(4c74dacd-68f3-4f56-8135-2b6bf32f7d04)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-6g7f2" podUID="4c74dacd-68f3-4f56-8135-2b6bf32f7d04"
	Jan 27 13:54:06 no-preload-563155 kubelet[3569]: E0127 13:54:06.869359    3569 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986046868565964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:06 no-preload-563155 kubelet[3569]: E0127 13:54:06.869381    3569 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986046868565964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152102,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [eebd2c6cf2f5fd0acc9bb2a5752b3ee5b41876ef16a5e6508eba1a510a5f5c35] <==
	2025/01/27 13:41:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:42:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:42:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:51:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:51:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:52:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:52:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:53:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:53:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [dcece71d1d7af5effa3c2aa7c9942f7a3ded17e169c9631a66c02b72b7aea899] <==
	I0127 13:33:04.270890       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 13:33:04.291439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 13:33:04.291596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 13:33:04.313138       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 13:33:04.313336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-563155_5ae65bce-781c-4816-a673-80ae8828e91a!
	I0127 13:33:04.314342       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"afceaa7f-6173-42df-adb8-926842dbdfbc", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-563155_5ae65bce-781c-4816-a673-80ae8828e91a became leader
	I0127 13:33:04.414263       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-563155_5ae65bce-781c-4816-a673-80ae8828e91a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-563155 -n no-preload-563155
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-563155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-bdcxg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-563155 describe pod metrics-server-f79f97bbb-bdcxg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-563155 describe pod metrics-server-f79f97bbb-bdcxg: exit status 1 (60.86242ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-bdcxg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-563155 describe pod metrics-server-f79f97bbb-bdcxg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1600.65s)
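For readers triaging the failure above: the captured kube-apiserver and kube-controller-manager logs show v1beta1.metrics.k8s.io answering with 503 because the metrics-server pod never became ready, and the kubelet log shows its image pull failing against fake.domain; the cluster config recorded later in this report (CustomAddonRegistries:map[MetricsServer:fake.domain]) suggests the unreachable registry is part of the test profile itself rather than an external outage. The following is a minimal, illustrative sketch of commands one might run against this profile to confirm the same state; it is not part of the captured test run, and the pod name is copied from the post-mortem output above (it may already be gone, as the describe step showed):

	kubectl --context no-preload-563155 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-563155 -n kube-system get pods -o wide
	kubectl --context no-preload-563155 -n kube-system describe pod metrics-server-f79f97bbb-bdcxg

The first command reports the aggregated API's Available condition (expected False/MissingEndpoints here), and the other two surface the ImagePullBackOff that keeps the APIService unavailable.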

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (1599.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-174381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 13:28:20.225107  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:30.267924  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:30.274408  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:30.286018  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:30.307418  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:30.348949  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:30.430941  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:30.592903  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:30.915359  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:31.557389  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:32.838758  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:34.952732  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:34.959149  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:34.970512  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:34.991877  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:35.033751  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:35.115155  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:35.276625  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:35.400532  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:35.598608  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:36.240669  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:36.343205  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-174381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: signal: killed (26m37.658999725s)

                                                
                                                
-- stdout --
	* [embed-certs-174381] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-174381" primary control-plane node in "embed-certs-174381" cluster
	* Restarting existing kvm2 VM for "embed-certs-174381" ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-174381 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:28:17.362059  426243 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:28:17.362204  426243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:28:17.362216  426243 out.go:358] Setting ErrFile to fd 2...
	I0127 13:28:17.362224  426243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:28:17.362519  426243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:28:17.363375  426243 out.go:352] Setting JSON to false
	I0127 13:28:17.364669  426243 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":22237,"bootTime":1737962260,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:28:17.364829  426243 start.go:139] virtualization: kvm guest
	I0127 13:28:17.367931  426243 out.go:177] * [embed-certs-174381] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:28:17.369377  426243 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:28:17.369405  426243 notify.go:220] Checking for updates...
	I0127 13:28:17.371807  426243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:28:17.373208  426243 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:28:17.374420  426243 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:28:17.375553  426243 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:28:17.376688  426243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:28:17.378464  426243 config.go:182] Loaded profile config "embed-certs-174381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:28:17.379102  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:28:17.379165  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:17.401269  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0127 13:28:17.401648  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:17.402260  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:28:17.402289  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:17.402627  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:17.402852  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:17.403079  426243 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:28:17.403389  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:28:17.403455  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:17.418472  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0127 13:28:17.418950  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:17.419399  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:28:17.419421  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:17.419776  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:17.419982  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:17.456597  426243 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:28:17.458065  426243 start.go:297] selected driver: kvm2
	I0127 13:28:17.458084  426243 start.go:901] validating driver "kvm2" against &{Name:embed-certs-174381 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-174381 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:17.458195  426243 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:28:17.458890  426243 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:28:17.458973  426243 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:28:17.474592  426243 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:28:17.475035  426243 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:28:17.475080  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:28:17.475143  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:28:17.475202  426243 start.go:340] cluster config:
	{Name:embed-certs-174381 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-174381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:17.475325  426243 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:28:17.476903  426243 out.go:177] * Starting "embed-certs-174381" primary control-plane node in "embed-certs-174381" cluster
	I0127 13:28:17.478008  426243 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:28:17.478047  426243 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:28:17.478059  426243 cache.go:56] Caching tarball of preloaded images
	I0127 13:28:17.478145  426243 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:28:17.478156  426243 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:28:17.478260  426243 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/config.json ...
	I0127 13:28:17.478470  426243 start.go:360] acquireMachinesLock for embed-certs-174381: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:28:29.467268  426243 start.go:364] duration metric: took 11.988769891s to acquireMachinesLock for "embed-certs-174381"
	I0127 13:28:29.467331  426243 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:28:29.467342  426243 fix.go:54] fixHost starting: 
	I0127 13:28:29.467739  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:28:29.467791  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:28:29.487753  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0127 13:28:29.488167  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:28:29.488800  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:28:29.488829  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:28:29.489240  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:28:29.489407  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:29.489593  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:28:29.491352  426243 fix.go:112] recreateIfNeeded on embed-certs-174381: state=Stopped err=<nil>
	I0127 13:28:29.491379  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	W0127 13:28:29.491524  426243 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:28:29.493892  426243 out.go:177] * Restarting existing kvm2 VM for "embed-certs-174381" ...
	I0127 13:28:29.495390  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Start
	I0127 13:28:29.495592  426243 main.go:141] libmachine: (embed-certs-174381) starting domain...
	I0127 13:28:29.495615  426243 main.go:141] libmachine: (embed-certs-174381) ensuring networks are active...
	I0127 13:28:29.496329  426243 main.go:141] libmachine: (embed-certs-174381) Ensuring network default is active
	I0127 13:28:29.496671  426243 main.go:141] libmachine: (embed-certs-174381) Ensuring network mk-embed-certs-174381 is active
	I0127 13:28:29.497082  426243 main.go:141] libmachine: (embed-certs-174381) getting domain XML...
	I0127 13:28:29.497978  426243 main.go:141] libmachine: (embed-certs-174381) creating domain...
	I0127 13:28:30.808105  426243 main.go:141] libmachine: (embed-certs-174381) waiting for IP...
	I0127 13:28:30.809046  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:30.809637  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:30.809720  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:30.809603  426345 retry.go:31] will retry after 250.492817ms: waiting for domain to come up
	I0127 13:28:31.062018  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:31.062696  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:31.062728  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:31.062672  426345 retry.go:31] will retry after 255.784532ms: waiting for domain to come up
	I0127 13:28:31.320085  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:31.320733  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:31.320804  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:31.320698  426345 retry.go:31] will retry after 427.661366ms: waiting for domain to come up
	I0127 13:28:31.750476  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:31.751027  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:31.751059  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:31.751017  426345 retry.go:31] will retry after 576.645961ms: waiting for domain to come up
	I0127 13:28:32.329416  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:32.330036  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:32.330064  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:32.330001  426345 retry.go:31] will retry after 621.533489ms: waiting for domain to come up
	I0127 13:28:32.952872  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:32.953569  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:32.953606  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:32.953533  426345 retry.go:31] will retry after 837.468294ms: waiting for domain to come up
	I0127 13:28:33.792835  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:33.793380  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:33.793462  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:33.793381  426345 retry.go:31] will retry after 721.618095ms: waiting for domain to come up
	I0127 13:28:34.516703  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:34.517299  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:34.517347  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:34.517256  426345 retry.go:31] will retry after 1.358395991s: waiting for domain to come up
	I0127 13:28:35.877855  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:35.878559  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:35.878635  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:35.878523  426345 retry.go:31] will retry after 1.42096681s: waiting for domain to come up
	I0127 13:28:37.301698  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:37.302362  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:37.302405  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:37.302345  426345 retry.go:31] will retry after 1.700725993s: waiting for domain to come up
	I0127 13:28:39.004187  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:39.004779  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:39.004829  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:39.004747  426345 retry.go:31] will retry after 2.269559822s: waiting for domain to come up
	I0127 13:28:41.275770  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:41.276409  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:41.276439  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:41.276352  426345 retry.go:31] will retry after 3.134584474s: waiting for domain to come up
	I0127 13:28:44.412813  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:44.413993  426243 main.go:141] libmachine: (embed-certs-174381) DBG | unable to find current IP address of domain embed-certs-174381 in network mk-embed-certs-174381
	I0127 13:28:44.414026  426243 main.go:141] libmachine: (embed-certs-174381) DBG | I0127 13:28:44.413931  426345 retry.go:31] will retry after 3.395780916s: waiting for domain to come up
	I0127 13:28:47.811657  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:47.812180  426243 main.go:141] libmachine: (embed-certs-174381) found domain IP: 192.168.39.7
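
The retries above ("will retry after 250ms ... 3.39s") are a wait-for-IP loop: poll the libvirt network's DHCP leases for the domain's MAC address, backing off between attempts until an address appears or a deadline expires. A minimal Go sketch of that pattern follows; lookupLeaseIP is a hypothetical stand-in for the real libvirt lease query, and the deadline value is an assumption for illustration only.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP is a placeholder for the libvirt DHCP-lease lookup used by
// the kvm2 driver; it reports the IP for a MAC address once a lease exists.
func lookupLeaseIP(network, mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until the domain obtains an IP, growing the delay and
// adding jitter between attempts, or gives up after the deadline.
func waitForIP(network, mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(network, mac); err == nil && ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jittered backoff
		fmt.Printf("will retry after %s: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", fmt.Errorf("domain %s never obtained an IP within %s", mac, deadline)
}

func main() {
	if ip, err := waitForIP("mk-embed-certs-174381", "52:54:00:dd:cc:c6", 30*time.Second); err == nil {
		fmt.Println("found domain IP:", ip)
	}
}
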
	I0127 13:28:47.812208  426243 main.go:141] libmachine: (embed-certs-174381) reserving static IP address...
	I0127 13:28:47.812218  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has current primary IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:47.812654  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "embed-certs-174381", mac: "52:54:00:dd:cc:c6", ip: "192.168.39.7"} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:47.812690  426243 main.go:141] libmachine: (embed-certs-174381) DBG | skip adding static IP to network mk-embed-certs-174381 - found existing host DHCP lease matching {name: "embed-certs-174381", mac: "52:54:00:dd:cc:c6", ip: "192.168.39.7"}
	I0127 13:28:47.812709  426243 main.go:141] libmachine: (embed-certs-174381) reserved static IP address 192.168.39.7 for domain embed-certs-174381
	I0127 13:28:47.812722  426243 main.go:141] libmachine: (embed-certs-174381) waiting for SSH...
	I0127 13:28:47.812738  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Getting to WaitForSSH function...
	I0127 13:28:47.814915  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:47.815278  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:47.815308  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:47.815406  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Using SSH client type: external
	I0127 13:28:47.815442  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa (-rw-------)
	I0127 13:28:47.815496  426243 main.go:141] libmachine: (embed-certs-174381) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:28:47.815535  426243 main.go:141] libmachine: (embed-certs-174381) DBG | About to run SSH command:
	I0127 13:28:47.815567  426243 main.go:141] libmachine: (embed-certs-174381) DBG | exit 0
	I0127 13:28:47.947341  426243 main.go:141] libmachine: (embed-certs-174381) DBG | SSH cmd err, output: <nil>: 
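
The WaitForSSH step shown above shells out to the system ssh client with the listed options and treats a clean exit of "exit 0" as proof that sshd accepts the machine key. A rough Go sketch of that probe, under the assumption of example paths and a fixed retry budget (not the report's real values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" over ssh with the non-interactive options from the
// log; a nil error means the daemon is up and the key is accepted.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.7", "/path/to/id_rsa") {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
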
	I0127 13:28:47.947817  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetConfigRaw
	I0127 13:28:47.948536  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetIP
	I0127 13:28:47.951432  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:47.951812  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:47.951844  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:47.952127  426243 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/config.json ...
	I0127 13:28:47.952369  426243 machine.go:93] provisionDockerMachine start ...
	I0127 13:28:47.952396  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:47.952632  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:47.954855  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:47.955239  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:47.955277  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:47.955489  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:47.955650  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:47.955793  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:47.955946  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:47.956121  426243 main.go:141] libmachine: Using SSH client type: native
	I0127 13:28:47.956339  426243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0127 13:28:47.956350  426243 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:28:48.066627  426243 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:28:48.066661  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetMachineName
	I0127 13:28:48.066897  426243 buildroot.go:166] provisioning hostname "embed-certs-174381"
	I0127 13:28:48.066918  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetMachineName
	I0127 13:28:48.067089  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:48.069479  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.069813  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:48.069841  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.069997  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:48.070200  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:48.070436  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:48.070608  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:48.070810  426243 main.go:141] libmachine: Using SSH client type: native
	I0127 13:28:48.070977  426243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0127 13:28:48.070992  426243 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-174381 && echo "embed-certs-174381" | sudo tee /etc/hostname
	I0127 13:28:48.193355  426243 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-174381
	
	I0127 13:28:48.193396  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:48.196029  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.196396  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:48.196436  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.196565  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:48.196756  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:48.196906  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:48.197031  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:48.197174  426243 main.go:141] libmachine: Using SSH client type: native
	I0127 13:28:48.197351  426243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0127 13:28:48.197369  426243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-174381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-174381/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-174381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:28:48.320057  426243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
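
The hostname script above ensures /etc/hosts carries a 127.0.1.1 entry for the new hostname, rewriting an existing 127.0.1.1 line or appending one. The Go sketch below mirrors that idea as a local file rewrite; it is an illustrative helper, not minikube's actual provisioner code, and it skips the initial grep-for-existing-hostname guard for brevity.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname rewrites any 127.0.1.1 line in hostsPath to point at
// hostname, or appends one if none exists.
func ensureHostname(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, line := range lines {
		fields := strings.Fields(line)
		if len(fields) > 1 && fields[0] == "127.0.1.1" {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "embed-certs-174381"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
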
	I0127 13:28:48.320088  426243 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:28:48.320126  426243 buildroot.go:174] setting up certificates
	I0127 13:28:48.320140  426243 provision.go:84] configureAuth start
	I0127 13:28:48.320154  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetMachineName
	I0127 13:28:48.320424  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetIP
	I0127 13:28:48.323129  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.323502  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:48.323531  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.323682  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:48.325614  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.325900  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:48.325943  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.326037  426243 provision.go:143] copyHostCerts
	I0127 13:28:48.326099  426243 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:28:48.326111  426243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:28:48.326183  426243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:28:48.326308  426243 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:28:48.326328  426243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:28:48.326359  426243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:28:48.326453  426243 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:28:48.326463  426243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:28:48.326488  426243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:28:48.326591  426243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.embed-certs-174381 san=[127.0.0.1 192.168.39.7 embed-certs-174381 localhost minikube]
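
"generating server cert ... san=[...]" boils down to issuing an x509 server certificate, signed by the profile CA, whose subject alternative names are the listed IPs and host names. A hedged sketch follows; the CA here is generated on the fly purely so the example is self-contained (minikube loads its existing ca.pem/ca-key.pem), and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA; real code parses the existing CA cert and key (errors elided).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-174381"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.7")},
		DNSNames:     []string{"embed-certs-174381", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
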
	I0127 13:28:48.501157  426243 provision.go:177] copyRemoteCerts
	I0127 13:28:48.501226  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:28:48.501259  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:48.504208  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.504518  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:48.504558  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.504754  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:48.504973  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:48.505174  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:48.505310  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:28:48.589249  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:28:48.613754  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0127 13:28:48.637387  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:28:48.661439  426243 provision.go:87] duration metric: took 341.283009ms to configureAuth
	I0127 13:28:48.661482  426243 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:28:48.661697  426243 config.go:182] Loaded profile config "embed-certs-174381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:28:48.661776  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:48.664971  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.665409  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:48.665442  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.665663  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:48.665854  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:48.666068  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:48.666235  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:48.666434  426243 main.go:141] libmachine: Using SSH client type: native
	I0127 13:28:48.666663  426243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0127 13:28:48.666680  426243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:28:48.899434  426243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:28:48.899468  426243 machine.go:96] duration metric: took 947.081625ms to provisionDockerMachine
	I0127 13:28:48.899505  426243 start.go:293] postStartSetup for "embed-certs-174381" (driver="kvm2")
	I0127 13:28:48.899517  426243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:28:48.899538  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:48.899832  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:28:48.899853  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:48.902745  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.903165  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:48.903199  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:48.903279  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:48.903479  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:48.903650  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:48.903812  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:28:48.989800  426243 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:28:48.994304  426243 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:28:48.994354  426243 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:28:48.994413  426243 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:28:48.994498  426243 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:28:48.994647  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:28:49.004431  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:28:49.028142  426243 start.go:296] duration metric: took 128.621062ms for postStartSetup
	I0127 13:28:49.028181  426243 fix.go:56] duration metric: took 19.560840188s for fixHost
	I0127 13:28:49.028203  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:49.031073  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:49.031410  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:49.031437  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:49.031592  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:49.031782  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:49.031917  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:49.032121  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:49.032301  426243 main.go:141] libmachine: Using SSH client type: native
	I0127 13:28:49.032498  426243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0127 13:28:49.032512  426243 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:28:49.143542  426243 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984529.103528786
	
	I0127 13:28:49.143570  426243 fix.go:216] guest clock: 1737984529.103528786
	I0127 13:28:49.143579  426243 fix.go:229] Guest: 2025-01-27 13:28:49.103528786 +0000 UTC Remote: 2025-01-27 13:28:49.028185263 +0000 UTC m=+31.713987868 (delta=75.343523ms)
	I0127 13:28:49.143618  426243 fix.go:200] guest clock delta is within tolerance: 75.343523ms
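
The guest-clock check above parses the guest's "date +%s.%N" output, compares it against the host clock, and accepts the skew if it is within tolerance. A minimal sketch of that comparison; the 2s tolerance constant is an assumed value for illustration, not the threshold minikube uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "seconds.nanoseconds" output from date +%s.%N into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1737984529.103528786")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %s within tolerance: %v\n", delta, delta < tolerance)
}
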
	I0127 13:28:49.143640  426243 start.go:83] releasing machines lock for "embed-certs-174381", held for 19.676331588s
	I0127 13:28:49.143669  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:49.143949  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetIP
	I0127 13:28:49.146557  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:49.146889  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:49.146916  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:49.147117  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:49.147759  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:49.147986  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:28:49.148126  426243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:28:49.148174  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:49.148217  426243 ssh_runner.go:195] Run: cat /version.json
	I0127 13:28:49.148257  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:28:49.150988  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:49.151145  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:49.151334  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:49.151371  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:49.151530  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:49.151533  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:49.151554  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:49.151718  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:28:49.151781  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:49.151880  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:28:49.151926  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:49.152015  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:28:49.152043  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:28:49.152168  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:28:49.258127  426243 ssh_runner.go:195] Run: systemctl --version
	I0127 13:28:49.264114  426243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:28:49.407830  426243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:28:49.417242  426243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:28:49.417307  426243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:28:49.440556  426243 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:28:49.440587  426243 start.go:495] detecting cgroup driver to use...
	I0127 13:28:49.440666  426243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:28:49.460127  426243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:28:49.476447  426243 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:28:49.476514  426243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:28:49.492738  426243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:28:49.506090  426243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:28:49.632091  426243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:28:49.780544  426243 docker.go:233] disabling docker service ...
	I0127 13:28:49.780623  426243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:28:49.795163  426243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:28:49.808610  426243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:28:49.966753  426243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:28:50.095449  426243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:28:50.110101  426243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:28:50.129635  426243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:28:50.129695  426243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:28:50.140246  426243 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:28:50.140307  426243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:28:50.150824  426243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:28:50.162856  426243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:28:50.174096  426243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:28:50.185145  426243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:28:50.195613  426243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:28:50.213398  426243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
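
The sed commands above all do the same thing to /etc/crio/crio.conf.d/02-crio.conf: set or replace a "key = value" line (pause_image, cgroup_manager, sysctls). An approximate Go equivalent of that pattern is sketched below; the file path and keys follow the log, but the helper itself is a simplified illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue rewrites the line defining key in a crio drop-in, or appends
// one if the key is absent, matching what the sed edits above accomplish.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
	_ = setConfValue(conf, "cgroup_manager", "cgroupfs")
}
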
	I0127 13:28:50.224125  426243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:28:50.233590  426243 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:28:50.233628  426243 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:28:50.246609  426243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:28:50.255971  426243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:28:50.377147  426243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:28:50.473523  426243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:28:50.473603  426243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:28:50.478887  426243 start.go:563] Will wait 60s for crictl version
	I0127 13:28:50.478943  426243 ssh_runner.go:195] Run: which crictl
	I0127 13:28:50.483062  426243 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:28:50.529699  426243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:28:50.529808  426243 ssh_runner.go:195] Run: crio --version
	I0127 13:28:50.561110  426243 ssh_runner.go:195] Run: crio --version
	I0127 13:28:50.593780  426243 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:28:50.595008  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetIP
	I0127 13:28:50.597913  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:50.598364  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:28:50.598393  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:28:50.598603  426243 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0127 13:28:50.603166  426243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:28:50.618364  426243 kubeadm.go:883] updating cluster {Name:embed-certs-174381 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-174381 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:28:50.618973  426243 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:28:50.619046  426243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:28:50.657192  426243 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 13:28:50.657273  426243 ssh_runner.go:195] Run: which lz4
	I0127 13:28:50.662671  426243 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:28:50.667238  426243 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:28:50.667268  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 13:28:52.077207  426243 crio.go:462] duration metric: took 1.414553236s to copy over tarball
	I0127 13:28:52.077296  426243 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:28:54.296027  426243 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.218683355s)
	I0127 13:28:54.296061  426243 crio.go:469] duration metric: took 2.218824619s to extract the tarball
	I0127 13:28:54.296072  426243 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 13:28:54.334323  426243 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:28:54.386725  426243 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:28:54.386755  426243 cache_images.go:84] Images are preloaded, skipping loading
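
The preload path above copies the cached .tar.lz4 to the guest and unpacks it with tar's lz4 filter before removing the temporary tarball, after which crictl reports all images as present. A loose Go sketch of the extract-and-clean-up step; runCommand is a hypothetical wrapper standing in for the ssh_runner calls, and the paths mirror the log.

package main

import (
	"fmt"
	"os/exec"
)

// runCommand executes a command and folds its combined output into the error.
func runCommand(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	tarball := "/preloaded.tar.lz4"
	// Unpack the preloaded image layers into /var (requires lz4 on the guest).
	if err := runCommand("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	_ = runCommand("sudo", "rm", "-f", tarball)
	fmt.Println("preloaded images extracted")
}
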
	I0127 13:28:54.386766  426243 kubeadm.go:934] updating node { 192.168.39.7 8443 v1.32.1 crio true true} ...
	I0127 13:28:54.386900  426243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-174381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:embed-certs-174381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:28:54.386989  426243 ssh_runner.go:195] Run: crio config
	I0127 13:28:54.447116  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:28:54.447146  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:28:54.447160  426243 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:28:54.447188  426243 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-174381 NodeName:embed-certs-174381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:28:54.447375  426243 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-174381"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.7"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:28:54.447459  426243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:28:54.462128  426243 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:28:54.462203  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:28:54.475736  426243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 13:28:54.495575  426243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:28:54.512461  426243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0127 13:28:54.530858  426243 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I0127 13:28:54.534995  426243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
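
The bash one-liner above rewrites /etc/hosts in place: it filters out any existing control-plane.minikube.internal line, appends the current 192.168.39.7 mapping, stages the result under /tmp, then copies it back with sudo. A small Go sketch of the same idea, under the assumption of direct file access rather than an SSH session (the helper name is invented for this illustration and is not minikube's code):

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry rewrites hostsPath so that exactly one line maps ip to host.
	// Sketch only: minikube performs the equivalent via a shell one-liner over SSH.
	func ensureHostsEntry(hostsPath, ip, host string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing entry for the hostname, mirroring the grep -v step.
			if strings.HasSuffix(strings.TrimSpace(line), host) && strings.TrimSpace(line) != "" {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.7", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
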
	I0127 13:28:54.548674  426243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:28:54.682111  426243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:28:54.700175  426243 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381 for IP: 192.168.39.7
	I0127 13:28:54.700203  426243 certs.go:194] generating shared ca certs ...
	I0127 13:28:54.700227  426243 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:28:54.700414  426243 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:28:54.700504  426243 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:28:54.700519  426243 certs.go:256] generating profile certs ...
	I0127 13:28:54.700646  426243 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/client.key
	I0127 13:28:54.700721  426243 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/apiserver.key.e667fc08
	I0127 13:28:54.700783  426243 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/proxy-client.key
	I0127 13:28:54.700951  426243 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:28:54.700994  426243 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:28:54.701009  426243 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:28:54.701040  426243 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:28:54.701086  426243 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:28:54.701120  426243 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:28:54.701178  426243 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:28:54.701890  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:28:54.751592  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:28:54.796006  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:28:54.834169  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:28:54.879431  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0127 13:28:54.926243  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:28:54.955302  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:28:54.988628  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/embed-certs-174381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 13:28:55.017699  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:28:55.044955  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:28:55.075004  426243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:28:55.103051  426243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:28:55.122255  426243 ssh_runner.go:195] Run: openssl version
	I0127 13:28:55.139912  426243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:28:55.151707  426243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:28:55.157100  426243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:28:55.157167  426243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:28:55.164030  426243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:28:55.175335  426243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:28:55.186351  426243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:28:55.191139  426243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:28:55.191186  426243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:28:55.197214  426243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:28:55.208267  426243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:28:55.219208  426243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:28:55.223855  426243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:28:55.223904  426243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:28:55.229825  426243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
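
Each extra CA above is copied into /usr/share/ca-certificates and then linked under /etc/ssl/certs by the hash that openssl x509 -hash -noout prints (51391683.0, 3ec20f2e.0, b5213941.0), which is the lookup scheme OpenSSL-based clients use to discover trusted CAs. A rough Go sketch of that step, shelling out to the same openssl invocation seen in the log; the helper name and the remove-then-relink behaviour are this sketch's own simplifications, not minikube's implementation:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCAByHash asks openssl for the certificate's subject hash and points
	// /etc/ssl/certs/<hash>.0 at the certificate file.
	func linkCAByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace a stale link if one is present
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
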
	I0127 13:28:55.240648  426243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:28:55.245522  426243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:28:55.251810  426243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:28:55.258058  426243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:28:55.264181  426243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:28:55.270301  426243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:28:55.276134  426243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
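
The six openssl x509 -checkend 86400 runs above ask whether each control-plane certificate remains valid for at least the next 24 hours (86,400 seconds); a non-zero exit would signal an imminent expiry. A minimal Go equivalent of that check, written only as an illustration (the file path in main is taken from the log; the helper name is made up):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same question "openssl x509 -checkend 86400" answers for d = 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
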
	I0127 13:28:55.283258  426243 kubeadm.go:392] StartCluster: {Name:embed-certs-174381 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-174381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:28:55.283385  426243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:28:55.283474  426243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:28:55.326507  426243 cri.go:89] found id: ""
	I0127 13:28:55.326610  426243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:28:55.336519  426243 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:28:55.336534  426243 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:28:55.336578  426243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:28:55.345948  426243 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:28:55.346888  426243 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-174381" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:28:55.347370  426243 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-174381" cluster setting kubeconfig missing "embed-certs-174381" context setting]
	I0127 13:28:55.348204  426243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:28:55.349982  426243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:28:55.360304  426243 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.7
	I0127 13:28:55.360338  426243 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:28:55.360352  426243 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:28:55.360405  426243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:28:55.398728  426243 cri.go:89] found id: ""
	I0127 13:28:55.398816  426243 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:28:55.416146  426243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:28:55.428069  426243 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:28:55.428085  426243 kubeadm.go:157] found existing configuration files:
	
	I0127 13:28:55.428137  426243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:28:55.437033  426243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:28:55.437096  426243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:28:55.449498  426243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:28:55.460026  426243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:28:55.460080  426243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:28:55.470622  426243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.479436  426243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:28:55.479489  426243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:28:55.488484  426243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:28:55.497578  426243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:28:55.497624  426243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:28:55.509329  426243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:28:55.519276  426243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:55.641615  426243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:56.934207  426243 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.292545126s)
	I0127 13:28:56.934242  426243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:57.151022  426243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:28:57.227938  426243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
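
Because the expected configuration files under /etc/kubernetes were missing, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml instead of running a full kubeadm init. A hedged sketch of driving those phases in order from Go; minikube itself runs them over SSH with its own PATH prefix, so this is only an outline:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// runInitPhases replays the kubeadm init phases seen in the log, in order,
	// against a staged config file.
	func runInitPhases(configPath string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", configPath)
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("kubeadm %v: %w", p, err)
			}
		}
		return nil
	}
	
	func main() {
		if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
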
	I0127 13:28:57.310204  426243 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:28:57.310318  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:28:57.810550  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:28:58.310693  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:28:58.342151  426243 api_server.go:72] duration metric: took 1.031940601s to wait for apiserver process to appear ...
	I0127 13:28:58.342191  426243 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:28:58.342218  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:28:58.342883  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I0127 13:28:58.842587  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:03.843716  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 13:29:03.843790  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:08.845255  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 13:29:08.845308  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:13.846850  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 13:29:13.846927  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:18.848077  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 13:29:18.848123  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:18.997031  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": read tcp 192.168.39.1:38920->192.168.39.7:8443: read: connection reset by peer
	I0127 13:29:19.342564  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:19.343262  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I0127 13:29:19.843306  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:24.843709  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 13:29:24.843762  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:29.844459  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 13:29:29.844510  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:34.845459  426243 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0127 13:29:34.845521  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:38.928679  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:29:38.928719  426243 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:29:38.928738  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:38.960648  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:38.960683  426243 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:39.343179  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:39.348122  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:39.348152  426243 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:39.843284  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:39.850043  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:29:39.850069  426243 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:29:40.342594  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:29:40.349336  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0127 13:29:40.357522  426243 api_server.go:141] control plane version: v1.32.1
	I0127 13:29:40.357551  426243 api_server.go:131] duration metric: took 42.015349921s to wait for apiserver health ...
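
The wait above polls https://192.168.39.7:8443/healthz roughly every half second, riding out connection refused/reset errors, a 403 while RBAC bootstrap has not yet granted anonymous access to /healthz, and 500s while poststarthooks are still failing, until a plain 200 "ok" comes back. A minimal Go sketch of such a poll; the 5s per-request timeout and the skipped TLS verification are assumptions made only to keep the example self-contained:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns 200 or the deadline passes. The
	// real checker also inspects 403/500 bodies; this sketch only looks at the code.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's serving cert is not trusted by the polling host, so
			// verification is skipped here as a simplifying assumption.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.39.7:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
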
	I0127 13:29:40.357563  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:29:40.357572  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:29:40.359100  426243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:29:40.360147  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:29:40.372373  426243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:29:40.391770  426243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:29:40.410677  426243 system_pods.go:59] 8 kube-system pods found
	I0127 13:29:40.410708  426243 system_pods.go:61] "coredns-668d6bf9bc-47brb" [791589a4-5df1-42bf-9f09-ee8dd6f68573] Running
	I0127 13:29:40.410715  426243 system_pods.go:61] "etcd-embed-certs-174381" [d1c28619-e586-447a-8310-57c1de51d625] Running
	I0127 13:29:40.410721  426243 system_pods.go:61] "kube-apiserver-embed-certs-174381" [67b2a7d3-e58a-430c-bc33-ef5354421095] Running
	I0127 13:29:40.410727  426243 system_pods.go:61] "kube-controller-manager-embed-certs-174381" [9078c51e-f265-4f76-ba7e-61d35ee57f7d] Running
	I0127 13:29:40.410731  426243 system_pods.go:61] "kube-proxy-mlv7g" [25fa7517-8d85-44e0-86fd-8e191155e2b4] Running
	I0127 13:29:40.410737  426243 system_pods.go:61] "kube-scheduler-embed-certs-174381" [022df4ce-9c81-4763-9f31-2db5e222d021] Running
	I0127 13:29:40.410747  426243 system_pods.go:61] "metrics-server-f79f97bbb-rbth5" [85fd0ed4-3d28-43cd-8f41-56edfee2d91a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:29:40.410759  426243 system_pods.go:61] "storage-provisioner" [3996f24e-064f-489b-8eb8-c414cf7df465] Running
	I0127 13:29:40.410768  426243 system_pods.go:74] duration metric: took 18.978088ms to wait for pod list to return data ...
	I0127 13:29:40.410778  426243 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:29:40.417071  426243 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:29:40.417099  426243 node_conditions.go:123] node cpu capacity is 2
	I0127 13:29:40.417113  426243 node_conditions.go:105] duration metric: took 6.329479ms to run NodePressure ...
	I0127 13:29:40.417155  426243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:29:40.720704  426243 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 13:29:40.726849  426243 retry.go:31] will retry after 324.757634ms: kubelet not initialised
	I0127 13:29:41.056219  426243 retry.go:31] will retry after 212.00572ms: kubelet not initialised
	I0127 13:29:41.272580  426243 retry.go:31] will retry after 842.29021ms: kubelet not initialised
	I0127 13:29:42.123243  426243 kubeadm.go:739] kubelet initialised
	I0127 13:29:42.123274  426243 kubeadm.go:740] duration metric: took 1.402542004s waiting for restarted kubelet to initialise ...
	I0127 13:29:42.123287  426243 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:29:42.134154  426243 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace to be "Ready" ...
	I0127 13:29:44.141900  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:46.641587  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:49.140315  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:51.640564  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:53.640875  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:56.140212  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:29:58.140722  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:00.140776  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:02.640893  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:05.141170  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:07.642057  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:10.140256  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:12.657336  426243 pod_ready.go:103] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:14.640946  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace has status "Ready":"True"
	I0127 13:30:14.640978  426243 pod_ready.go:82] duration metric: took 32.506787887s for pod "coredns-668d6bf9bc-47brb" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.640992  426243 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.645748  426243 pod_ready.go:93] pod "etcd-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:30:14.645770  426243 pod_ready.go:82] duration metric: took 4.77051ms for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.645779  426243 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.650970  426243 pod_ready.go:93] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:30:14.651041  426243 pod_ready.go:82] duration metric: took 5.253972ms for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.651055  426243 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.657436  426243 pod_ready.go:93] pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:30:14.657459  426243 pod_ready.go:82] duration metric: took 6.394912ms for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.657471  426243 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mlv7g" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.662378  426243 pod_ready.go:93] pod "kube-proxy-mlv7g" in "kube-system" namespace has status "Ready":"True"
	I0127 13:30:14.662398  426243 pod_ready.go:82] duration metric: took 4.920193ms for pod "kube-proxy-mlv7g" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:14.662414  426243 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:15.038514  426243 pod_ready.go:93] pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:30:15.038563  426243 pod_ready.go:82] duration metric: took 376.138665ms for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:15.038577  426243 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace to be "Ready" ...
	I0127 13:30:17.045661  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:19.045740  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:21.045997  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:23.046335  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:25.046494  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:27.547487  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:30.045577  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:32.045880  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:34.045990  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:36.545596  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:38.545944  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:41.046514  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:43.046659  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:45.544678  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:47.547332  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:50.046262  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:52.545162  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:54.545988  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:57.046211  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:30:59.545859  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:02.045283  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:04.046246  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:06.545835  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:08.546287  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:11.045309  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:13.544734  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:16.045249  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:18.545618  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:21.044535  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:23.046305  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:25.545990  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:28.044570  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:30.045307  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:32.046066  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:34.545661  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:37.044330  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:39.545564  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:42.045513  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:44.545173  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:46.545329  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:49.044502  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:51.046043  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:53.545313  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:56.047779  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:31:58.545200  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:01.044747  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:03.544763  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:05.545987  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:08.045074  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:10.546523  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:13.045130  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:15.544806  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:17.547325  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:20.045312  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:22.045521  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:24.545930  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:27.045335  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:29.046298  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:31.046590  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:33.545254  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:35.545642  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:37.546781  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:40.044893  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:42.543923  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:44.545389  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:47.044839  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:49.045297  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:51.046600  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:53.055496  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:55.545905  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:32:57.546017  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:33:00.045744  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:33:02.047433  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:33:04.549068  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:33:07.046016  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	[... 29 similar pod_ready.go:103 entries from 13:33:09 to 13:34:12 elided; pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace continued to report "Ready":"False" at every poll ...]
	I0127 13:34:14.545184  426243 pod_ready.go:103] pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace has status "Ready":"False"
	I0127 13:34:15.038750  426243 pod_ready.go:82] duration metric: took 4m0.000153516s for pod "metrics-server-f79f97bbb-rbth5" in "kube-system" namespace to be "Ready" ...
	E0127 13:34:15.038799  426243 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0127 13:34:15.038820  426243 pod_ready.go:39] duration metric: took 4m32.915517913s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:34:15.038855  426243 kubeadm.go:597] duration metric: took 5m19.702314984s to restartPrimaryControlPlane
	W0127 13:34:15.038927  426243 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:34:15.038963  426243 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:34:43.008901  426243 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.969904953s)
	I0127 13:34:43.008991  426243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:34:43.028450  426243 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:34:43.051715  426243 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:34:43.071414  426243 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:34:43.071443  426243 kubeadm.go:157] found existing configuration files:
	
	I0127 13:34:43.071500  426243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:34:43.106644  426243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:34:43.106710  426243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:34:43.119642  426243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:34:43.139116  426243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:34:43.139183  426243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:34:43.160129  426243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:34:43.170134  426243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:34:43.170228  426243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:34:43.181131  426243 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:34:43.191233  426243 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:34:43.191288  426243 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:34:43.201907  426243 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:34:43.390681  426243 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:34:52.576263  426243 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:34:52.576356  426243 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:52.576423  426243 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:52.576582  426243 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:52.576704  426243 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:34:52.576783  426243 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:52.578299  426243 out.go:235]   - Generating certificates and keys ...
	I0127 13:34:52.578380  426243 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:34:52.578439  426243 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:34:52.578509  426243 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:34:52.578594  426243 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:34:52.578701  426243 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:34:52.578757  426243 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:34:52.578818  426243 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:34:52.578870  426243 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:34:52.578962  426243 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:34:52.579063  426243 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:34:52.579111  426243 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:34:52.579164  426243 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:34:52.579227  426243 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:34:52.579282  426243 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:34:52.579333  426243 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:34:52.579387  426243 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:34:52.579449  426243 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:34:52.579519  426243 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:34:52.579604  426243 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:34:52.581730  426243 out.go:235]   - Booting up control plane ...
	I0127 13:34:52.581854  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:34:52.581961  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:34:52.582058  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:34:52.582184  426243 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:34:52.582253  426243 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:34:52.582290  426243 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:34:52.582417  426243 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:34:52.582554  426243 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:34:52.582651  426243 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002999225s
	I0127 13:34:52.582795  426243 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:34:52.582903  426243 kubeadm.go:310] [api-check] The API server is healthy after 5.501149453s
	I0127 13:34:52.583076  426243 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:34:52.583258  426243 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:34:52.583323  426243 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:34:52.583591  426243 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-174381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:34:52.583679  426243 kubeadm.go:310] [bootstrap-token] Using token: 5hn0ox.etnk5twofkqgha4f
	I0127 13:34:52.584876  426243 out.go:235]   - Configuring RBAC rules ...
	I0127 13:34:52.585016  426243 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:34:52.585138  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:34:52.585329  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:34:52.585515  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:34:52.585645  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:34:52.585730  426243 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:34:52.585829  426243 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:34:52.585867  426243 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:34:52.585911  426243 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:34:52.585917  426243 kubeadm.go:310] 
	I0127 13:34:52.585967  426243 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:34:52.585973  426243 kubeadm.go:310] 
	I0127 13:34:52.586066  426243 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:34:52.586082  426243 kubeadm.go:310] 
	I0127 13:34:52.586138  426243 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:34:52.586214  426243 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:34:52.586295  426243 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:34:52.586319  426243 kubeadm.go:310] 
	I0127 13:34:52.586416  426243 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:34:52.586463  426243 kubeadm.go:310] 
	I0127 13:34:52.586522  426243 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:34:52.586532  426243 kubeadm.go:310] 
	I0127 13:34:52.586628  426243 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:34:52.586712  426243 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:34:52.586770  426243 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:34:52.586777  426243 kubeadm.go:310] 
	I0127 13:34:52.586857  426243 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:34:52.586926  426243 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:34:52.586932  426243 kubeadm.go:310] 
	I0127 13:34:52.587010  426243 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587095  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 \
	I0127 13:34:52.587119  426243 kubeadm.go:310] 	--control-plane 
	I0127 13:34:52.587125  426243 kubeadm.go:310] 
	I0127 13:34:52.587196  426243 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:34:52.587204  426243 kubeadm.go:310] 
	I0127 13:34:52.587272  426243 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587400  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 
	I0127 13:34:52.587418  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:34:52.587432  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:52.588976  426243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:34:52.590276  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:34:52.604204  426243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:34:52.631515  426243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:34:52.631609  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:52.631702  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-174381 minikube.k8s.io/updated_at=2025_01_27T13_34_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-174381 minikube.k8s.io/primary=true
	I0127 13:34:52.663541  426243 ops.go:34] apiserver oom_adj: -16
	I0127 13:34:52.870691  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.371756  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.871386  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.371644  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.871179  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.370747  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.871458  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.371676  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.870824  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.982232  426243 kubeadm.go:1113] duration metric: took 4.350694221s to wait for elevateKubeSystemPrivileges
	I0127 13:34:56.982281  426243 kubeadm.go:394] duration metric: took 6m1.699030467s to StartCluster
	I0127 13:34:56.982314  426243 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.982426  426243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:56.983746  426243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.984032  426243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:34:56.984111  426243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:34:56.984230  426243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-174381"
	I0127 13:34:56.984249  426243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-174381"
	W0127 13:34:56.984258  426243 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:34:56.984273  426243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-174381"
	I0127 13:34:56.984292  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984300  426243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-174381"
	I0127 13:34:56.984303  426243 config.go:182] Loaded profile config "embed-certs-174381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:56.984359  426243 addons.go:69] Setting dashboard=true in profile "embed-certs-174381"
	I0127 13:34:56.984372  426243 addons.go:238] Setting addon dashboard=true in "embed-certs-174381"
	W0127 13:34:56.984381  426243 addons.go:247] addon dashboard should already be in state true
	I0127 13:34:56.984405  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984450  426243 addons.go:69] Setting metrics-server=true in profile "embed-certs-174381"
	I0127 13:34:56.984484  426243 addons.go:238] Setting addon metrics-server=true in "embed-certs-174381"
	W0127 13:34:56.984494  426243 addons.go:247] addon metrics-server should already be in state true
	I0127 13:34:56.984524  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984760  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984778  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984799  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984801  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984812  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984826  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984943  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984977  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.986354  426243 out.go:177] * Verifying Kubernetes components...
	I0127 13:34:56.988314  426243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:34:57.003008  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I0127 13:34:57.003716  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.003737  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0127 13:34:57.004011  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0127 13:34:57.004163  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004169  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0127 13:34:57.004457  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004482  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.004559  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004638  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004651  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004670  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005012  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005085  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005111  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005198  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.005324  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005340  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005955  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005969  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005970  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.006577  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006617  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.006912  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006964  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.007601  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.007633  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.009217  426243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-174381"
	W0127 13:34:57.009239  426243 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:34:57.009268  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:57.009605  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.009648  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.027242  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0127 13:34:57.027495  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0127 13:34:57.027644  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.027844  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.028181  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028198  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028301  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028318  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028539  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028633  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.028808  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.029068  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0127 13:34:57.029543  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.030162  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.030190  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.030581  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.030601  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031166  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.031207  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.031430  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031637  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I0127 13:34:57.031993  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.032625  426243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:34:57.032750  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.032765  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.033302  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.033477  426243 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:34:57.033498  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.033587  426243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.033607  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:34:57.033627  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.035541  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.035761  426243 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036794  426243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036804  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:34:57.036814  426243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:34:57.036833  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.037349  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.037808  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:34:57.037827  426243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:34:57.037856  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.038015  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.038042  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.038208  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.038375  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.038561  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.038701  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.041035  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041500  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.041519  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041915  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042008  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042189  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042254  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.042272  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042592  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042583  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.042727  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042852  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.055810  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0127 13:34:57.056237  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.056772  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.056801  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.057165  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.057501  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.059165  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.059398  426243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.059418  426243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:34:57.059437  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.062703  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063236  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.063266  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063369  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.063544  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.063694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.063831  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.242347  426243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:34:57.326178  426243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352801  426243 node_ready.go:49] node "embed-certs-174381" has status "Ready":"True"
	I0127 13:34:57.352828  426243 node_ready.go:38] duration metric: took 26.613856ms for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352841  426243 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:34:57.368293  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:57.372941  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:34:57.372962  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:34:57.391676  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.418587  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:34:57.418616  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:34:57.446588  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.460844  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:34:57.460869  426243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:34:57.507947  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:34:57.507976  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:34:57.542669  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.542701  426243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:34:57.630641  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:34:57.630672  426243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:34:57.639506  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.693463  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:34:57.693498  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:34:57.806045  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:34:57.806082  426243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:34:57.930058  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:34:57.930101  426243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:34:58.055263  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:34:58.055295  426243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:34:58.110576  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:34:58.110609  426243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:34:58.202270  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.202305  426243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:34:58.293311  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.514356  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067720868s)
	I0127 13:34:58.514435  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514450  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.514846  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.514876  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.514894  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.514909  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514920  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.515161  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.515197  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.515860  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516243  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124532885s)
	I0127 13:34:58.516270  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516281  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.516739  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.516757  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.516768  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516776  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516787  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.517207  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.517230  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549206  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.549228  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.549614  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.549638  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549648  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260116  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.620545789s)
	I0127 13:34:59.260244  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260271  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.260620  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260713  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.260730  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.260746  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260761  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.261011  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.261041  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.261061  426243 addons.go:479] Verifying addon metrics-server=true in "embed-certs-174381"
	I0127 13:34:59.395546  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.395572  426243 pod_ready.go:82] duration metric: took 2.027244475s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.395586  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407673  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.407695  426243 pod_ready.go:82] duration metric: took 12.102291ms for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407705  426243 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417168  426243 pod_ready.go:93] pod "etcd-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.417190  426243 pod_ready.go:82] duration metric: took 9.47928ms for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417199  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:00.168433  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.875044372s)
	I0127 13:35:00.168496  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.168520  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.168866  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.170590  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.170645  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.170666  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.170673  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.171042  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.171132  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.171105  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.172686  426243 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-174381 addons enable metrics-server
	
	I0127 13:35:00.174376  426243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:35:00.175566  426243 addons.go:514] duration metric: took 3.191465201s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:35:01.424773  426243 pod_ready.go:103] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"False"
	I0127 13:35:01.924012  426243 pod_ready.go:93] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:01.924044  426243 pod_ready.go:82] duration metric: took 2.506836977s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:01.924057  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.931062  426243 pod_ready.go:93] pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.931095  426243 pod_ready.go:82] duration metric: took 1.007026875s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.931108  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936917  426243 pod_ready.go:93] pod "kube-proxy-cjsf9" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.936945  426243 pod_ready.go:82] duration metric: took 5.828276ms for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936957  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444155  426243 pod_ready.go:93] pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:03.444192  426243 pod_ready.go:82] duration metric: took 507.225554ms for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444203  426243 pod_ready.go:39] duration metric: took 6.091349359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:35:03.444226  426243 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:03.444294  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:03.488162  426243 api_server.go:72] duration metric: took 6.504085901s to wait for apiserver process to appear ...
	I0127 13:35:03.488197  426243 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:03.488224  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:35:03.493586  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0127 13:35:03.494867  426243 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:03.494894  426243 api_server.go:131] duration metric: took 6.689991ms to wait for apiserver health ...
	I0127 13:35:03.494903  426243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:03.575835  426243 system_pods.go:59] 9 kube-system pods found
	I0127 13:35:03.575871  426243 system_pods.go:61] "coredns-668d6bf9bc-9ncnm" [8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7] Running
	I0127 13:35:03.575877  426243 system_pods.go:61] "coredns-668d6bf9bc-hjncm" [68641e50-9f99-4811-9752-c7dc0db47502] Running
	I0127 13:35:03.575881  426243 system_pods.go:61] "etcd-embed-certs-174381" [fc5cb0ba-724d-4b3d-a6d0-65644ed57d99] Running
	I0127 13:35:03.575886  426243 system_pods.go:61] "kube-apiserver-embed-certs-174381" [7afdc2d3-86bd-480d-a081-e1475ff21346] Running
	I0127 13:35:03.575890  426243 system_pods.go:61] "kube-controller-manager-embed-certs-174381" [fa410171-2b30-4c79-97d4-87c1549fd75c] Running
	I0127 13:35:03.575894  426243 system_pods.go:61] "kube-proxy-cjsf9" [c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6] Running
	I0127 13:35:03.575901  426243 system_pods.go:61] "kube-scheduler-embed-certs-174381" [ab92b381-fb78-4aa1-bc55-4e47a58f2c32] Running
	I0127 13:35:03.575908  426243 system_pods.go:61] "metrics-server-f79f97bbb-hxlwf" [cb779c78-85f9-48e7-88c3-f087f57547e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:03.575913  426243 system_pods.go:61] "storage-provisioner" [3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1] Running
	I0127 13:35:03.575922  426243 system_pods.go:74] duration metric: took 81.012821ms to wait for pod list to return data ...
	I0127 13:35:03.575931  426243 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:03.772597  426243 default_sa.go:45] found service account: "default"
	I0127 13:35:03.772641  426243 default_sa.go:55] duration metric: took 196.700969ms for default service account to be created ...
	I0127 13:35:03.772655  426243 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:35:03.976966  426243 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-174381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1": signal: killed
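For reference, a minimal sketch of how this run could be retried and the unready metrics-server pod inspected by hand. The binary path, profile name, start flags, and kubeconfig context are taken from the failing command and the log above; the metrics-server deployment name and the k8s-app=metrics-server label selector are assumptions about how the minikube addon labels its objects, not something confirmed by this report.

	# Retry the start that was killed (same args as the failed invocation above).
	out/minikube-linux-amd64 start -p embed-certs-174381 --memory=2200 \
	  --alsologtostderr --wait=true --embed-certs --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.32.1

	# Inspect whichever metrics-server pod is currently unready
	# (label selector and deployment name are assumptions, see note above).
	kubectl --context embed-certs-174381 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context embed-certs-174381 -n kube-system describe pods -l k8s-app=metrics-server
	kubectl --context embed-certs-174381 -n kube-system logs deploy/metrics-server --all-containers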
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174381 -n embed-certs-174381
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-174381 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-174381 logs -n 25: (1.438736625s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-441438       | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-174381                 | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-174381                                  | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-838260        | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-838260             | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-441438                           | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-639843             | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-639843                  | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-639843 image list                           | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	| delete  | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:54 UTC | 27 Jan 25 13:54 UTC |
	| delete  | -p no-preload-563155                                   | no-preload-563155            | jenkins | v1.35.0 | 27 Jan 25 13:54 UTC | 27 Jan 25 13:54 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:34:50
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:34:50.343590  429070 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:34:50.343706  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343717  429070 out.go:358] Setting ErrFile to fd 2...
	I0127 13:34:50.343725  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343905  429070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:34:50.344540  429070 out.go:352] Setting JSON to false
	I0127 13:34:50.345553  429070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":22630,"bootTime":1737962260,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:34:50.345705  429070 start.go:139] virtualization: kvm guest
	I0127 13:34:50.348432  429070 out.go:177] * [newest-cni-639843] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:34:50.349607  429070 notify.go:220] Checking for updates...
	I0127 13:34:50.349639  429070 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:34:50.350877  429070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:34:50.352137  429070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:50.353523  429070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:34:50.354936  429070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:34:50.356253  429070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:34:50.358120  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:50.358577  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.358648  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.375344  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 13:34:50.375770  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.376385  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.376429  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.376809  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.377061  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.377398  429070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:34:50.377833  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.377889  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.393490  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0127 13:34:50.393954  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.394574  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.394602  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.394931  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.395175  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.432045  429070 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:34:50.433260  429070 start.go:297] selected driver: kvm2
	I0127 13:34:50.433295  429070 start.go:901] validating driver "kvm2" against &{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.433450  429070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:34:50.434521  429070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.434662  429070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:34:50.455080  429070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:34:50.455695  429070 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:34:50.455755  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:34:50.455835  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:50.455908  429070 start.go:340] cluster config:
	{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.456092  429070 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.457706  429070 out.go:177] * Starting "newest-cni-639843" primary control-plane node in "newest-cni-639843" cluster
	I0127 13:34:50.458857  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:34:50.458907  429070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:34:50.458924  429070 cache.go:56] Caching tarball of preloaded images
	I0127 13:34:50.459033  429070 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:34:50.459049  429070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:34:50.459193  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:34:50.459403  429070 start.go:360] acquireMachinesLock for newest-cni-639843: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:34:50.459457  429070 start.go:364] duration metric: took 33.893µs to acquireMachinesLock for "newest-cni-639843"
	I0127 13:34:50.459478  429070 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:34:50.459488  429070 fix.go:54] fixHost starting: 
	I0127 13:34:50.459761  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.459807  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.475245  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0127 13:34:50.475743  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.476455  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.476504  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.476932  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.477227  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.477420  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:34:50.479725  429070 fix.go:112] recreateIfNeeded on newest-cni-639843: state=Stopped err=<nil>
	I0127 13:34:50.479768  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	W0127 13:34:50.479933  429070 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:34:50.481457  429070 out.go:177] * Restarting existing kvm2 VM for "newest-cni-639843" ...
	I0127 13:34:48.302747  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:48.321834  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:48.321899  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:48.370678  427154 cri.go:89] found id: ""
	I0127 13:34:48.370716  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.370732  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:48.370741  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:48.370813  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:48.430514  427154 cri.go:89] found id: ""
	I0127 13:34:48.430655  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.430683  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:48.430702  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:48.430826  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:48.477908  427154 cri.go:89] found id: ""
	I0127 13:34:48.477941  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.477954  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:48.477962  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:48.478036  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:48.532193  427154 cri.go:89] found id: ""
	I0127 13:34:48.532230  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.532242  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:48.532250  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:48.532316  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:48.580627  427154 cri.go:89] found id: ""
	I0127 13:34:48.580658  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.580667  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:48.580673  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:48.580744  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:48.620393  427154 cri.go:89] found id: ""
	I0127 13:34:48.620428  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.620441  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:48.620449  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:48.620518  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:48.662032  427154 cri.go:89] found id: ""
	I0127 13:34:48.662071  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.662079  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:48.662097  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:48.662164  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:48.699662  427154 cri.go:89] found id: ""
	I0127 13:34:48.699697  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.699709  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:48.699723  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:48.699745  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:48.752100  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:48.752134  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:48.768121  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:48.768167  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:48.838690  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:48.838718  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:48.838734  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:48.928433  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:48.928471  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:52.576263  426243 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:34:52.576356  426243 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:52.576423  426243 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:52.576582  426243 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:52.576704  426243 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:34:52.576783  426243 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:52.578299  426243 out.go:235]   - Generating certificates and keys ...
	I0127 13:34:52.578380  426243 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:34:52.578439  426243 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:34:52.578509  426243 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:34:52.578594  426243 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:34:52.578701  426243 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:34:52.578757  426243 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:34:52.578818  426243 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:34:52.578870  426243 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:34:52.578962  426243 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:34:52.579063  426243 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:34:52.579111  426243 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:34:52.579164  426243 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:34:52.579227  426243 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:34:52.579282  426243 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:34:52.579333  426243 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:34:52.579387  426243 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:34:52.579449  426243 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:34:52.579519  426243 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:34:52.579604  426243 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:34:52.581730  426243 out.go:235]   - Booting up control plane ...
	I0127 13:34:52.581854  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:34:52.581961  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:34:52.582058  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:34:52.582184  426243 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:34:52.582253  426243 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:34:52.582290  426243 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:34:52.582417  426243 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:34:52.582554  426243 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:34:52.582651  426243 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002999225s
	I0127 13:34:52.582795  426243 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:34:52.582903  426243 kubeadm.go:310] [api-check] The API server is healthy after 5.501149453s
	I0127 13:34:52.583076  426243 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:34:52.583258  426243 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:34:52.583323  426243 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:34:52.583591  426243 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-174381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:34:52.583679  426243 kubeadm.go:310] [bootstrap-token] Using token: 5hn0ox.etnk5twofkqgha4f
	I0127 13:34:52.584876  426243 out.go:235]   - Configuring RBAC rules ...
	I0127 13:34:52.585016  426243 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:34:52.585138  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:34:52.585329  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:34:52.585515  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:34:52.585645  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:34:52.585730  426243 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:34:52.585829  426243 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:34:52.585867  426243 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:34:52.585911  426243 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:34:52.585917  426243 kubeadm.go:310] 
	I0127 13:34:52.585967  426243 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:34:52.585973  426243 kubeadm.go:310] 
	I0127 13:34:52.586066  426243 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:34:52.586082  426243 kubeadm.go:310] 
	I0127 13:34:52.586138  426243 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:34:52.586214  426243 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:34:52.586295  426243 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:34:52.586319  426243 kubeadm.go:310] 
	I0127 13:34:52.586416  426243 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:34:52.586463  426243 kubeadm.go:310] 
	I0127 13:34:52.586522  426243 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:34:52.586532  426243 kubeadm.go:310] 
	I0127 13:34:52.586628  426243 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:34:52.586712  426243 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:34:52.586770  426243 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:34:52.586777  426243 kubeadm.go:310] 
	I0127 13:34:52.586857  426243 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:34:52.586926  426243 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:34:52.586932  426243 kubeadm.go:310] 
	I0127 13:34:52.587010  426243 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587095  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 \
	I0127 13:34:52.587119  426243 kubeadm.go:310] 	--control-plane 
	I0127 13:34:52.587125  426243 kubeadm.go:310] 
	I0127 13:34:52.587196  426243 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:34:52.587204  426243 kubeadm.go:310] 
	I0127 13:34:52.587272  426243 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587400  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 
	I0127 13:34:52.587418  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:34:52.587432  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:52.588976  426243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:34:50.482735  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Start
	I0127 13:34:50.482923  429070 main.go:141] libmachine: (newest-cni-639843) starting domain...
	I0127 13:34:50.482942  429070 main.go:141] libmachine: (newest-cni-639843) ensuring networks are active...
	I0127 13:34:50.483967  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network default is active
	I0127 13:34:50.484412  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network mk-newest-cni-639843 is active
	I0127 13:34:50.484881  429070 main.go:141] libmachine: (newest-cni-639843) getting domain XML...
	I0127 13:34:50.485667  429070 main.go:141] libmachine: (newest-cni-639843) creating domain...
	I0127 13:34:51.790885  429070 main.go:141] libmachine: (newest-cni-639843) waiting for IP...
	I0127 13:34:51.792240  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:51.793056  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:51.793082  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:51.792897  429104 retry.go:31] will retry after 310.654811ms: waiting for domain to come up
	I0127 13:34:52.105667  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.106457  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.106639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.106581  429104 retry.go:31] will retry after 280.140783ms: waiting for domain to come up
	I0127 13:34:52.388057  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.388616  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.388639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.388575  429104 retry.go:31] will retry after 317.414736ms: waiting for domain to come up
	I0127 13:34:52.708208  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.708845  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.708880  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.708795  429104 retry.go:31] will retry after 475.980482ms: waiting for domain to come up
	I0127 13:34:53.186613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.187252  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.187320  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.187240  429104 retry.go:31] will retry after 619.306112ms: waiting for domain to come up
	I0127 13:34:53.807794  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.808436  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.808485  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.808365  429104 retry.go:31] will retry after 838.158661ms: waiting for domain to come up
	I0127 13:34:54.647849  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:54.648442  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:54.648465  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:54.648411  429104 retry.go:31] will retry after 739.028542ms: waiting for domain to come up
	I0127 13:34:51.475609  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:51.489500  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:51.489579  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:51.536219  427154 cri.go:89] found id: ""
	I0127 13:34:51.536250  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.536262  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:51.536270  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:51.536334  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:51.577494  427154 cri.go:89] found id: ""
	I0127 13:34:51.577522  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.577536  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:51.577543  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:51.577606  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:51.614430  427154 cri.go:89] found id: ""
	I0127 13:34:51.614463  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.614476  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:51.614484  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:51.614602  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:51.666530  427154 cri.go:89] found id: ""
	I0127 13:34:51.666582  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.666591  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:51.666597  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:51.666653  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:51.705538  427154 cri.go:89] found id: ""
	I0127 13:34:51.705567  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.705579  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:51.705587  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:51.705645  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:51.743604  427154 cri.go:89] found id: ""
	I0127 13:34:51.743638  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.743650  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:51.743658  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:51.743721  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:51.778029  427154 cri.go:89] found id: ""
	I0127 13:34:51.778058  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.778070  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:51.778078  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:51.778148  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:51.819260  427154 cri.go:89] found id: ""
	I0127 13:34:51.819294  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.819307  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:51.819321  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:51.819338  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:51.887511  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:51.887552  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:51.904227  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:51.904261  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:51.980655  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:51.980684  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:51.980699  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:52.085922  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:52.085973  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:54.642029  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:54.655922  427154 kubeadm.go:597] duration metric: took 4m4.240008337s to restartPrimaryControlPlane
	W0127 13:34:54.656192  427154 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:34:54.656244  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:34:52.590276  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:34:52.604204  426243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:34:52.631515  426243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:34:52.631609  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:52.631702  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-174381 minikube.k8s.io/updated_at=2025_01_27T13_34_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-174381 minikube.k8s.io/primary=true
	I0127 13:34:52.663541  426243 ops.go:34] apiserver oom_adj: -16
	I0127 13:34:52.870691  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.371756  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.871386  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.371644  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.871179  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.370747  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.871458  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.371676  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.870824  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.982232  426243 kubeadm.go:1113] duration metric: took 4.350694221s to wait for elevateKubeSystemPrivileges
	I0127 13:34:56.982281  426243 kubeadm.go:394] duration metric: took 6m1.699030467s to StartCluster
	I0127 13:34:56.982314  426243 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.982426  426243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:56.983746  426243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.984032  426243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:34:56.984111  426243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:34:56.984230  426243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-174381"
	I0127 13:34:56.984249  426243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-174381"
	W0127 13:34:56.984258  426243 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:34:56.984273  426243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-174381"
	I0127 13:34:56.984292  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984300  426243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-174381"
	I0127 13:34:56.984303  426243 config.go:182] Loaded profile config "embed-certs-174381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:56.984359  426243 addons.go:69] Setting dashboard=true in profile "embed-certs-174381"
	I0127 13:34:56.984372  426243 addons.go:238] Setting addon dashboard=true in "embed-certs-174381"
	W0127 13:34:56.984381  426243 addons.go:247] addon dashboard should already be in state true
	I0127 13:34:56.984405  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984450  426243 addons.go:69] Setting metrics-server=true in profile "embed-certs-174381"
	I0127 13:34:56.984484  426243 addons.go:238] Setting addon metrics-server=true in "embed-certs-174381"
	W0127 13:34:56.984494  426243 addons.go:247] addon metrics-server should already be in state true
	I0127 13:34:56.984524  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984760  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984778  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984799  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984801  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984812  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984826  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984943  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984977  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.986354  426243 out.go:177] * Verifying Kubernetes components...
	I0127 13:34:56.988314  426243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:34:57.003008  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I0127 13:34:57.003716  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.003737  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0127 13:34:57.004011  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0127 13:34:57.004163  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004169  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0127 13:34:57.004457  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004482  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.004559  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004638  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004651  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004670  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005012  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005085  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005111  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005198  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.005324  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005340  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005955  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005969  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005970  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.006577  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006617  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.006912  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006964  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.007601  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.007633  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.009217  426243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-174381"
	W0127 13:34:57.009239  426243 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:34:57.009268  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:57.009605  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.009648  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.027242  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0127 13:34:57.027495  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0127 13:34:57.027644  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.027844  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.028181  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028198  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028301  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028318  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028539  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028633  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.028808  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.029068  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0127 13:34:57.029543  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.030162  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.030190  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.030581  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.030601  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031166  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.031207  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.031430  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031637  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I0127 13:34:57.031993  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.032625  426243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:34:57.032750  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.032765  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.033302  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.033477  426243 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:34:57.033498  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.033587  426243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.033607  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:34:57.033627  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.035541  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.035761  426243 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036794  426243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036804  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:34:57.036814  426243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:34:57.036833  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.037349  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.037808  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:34:57.037827  426243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:34:57.037856  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.038015  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.038042  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.038208  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.038375  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.038561  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.038701  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.041035  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041500  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.041519  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041915  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042008  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042189  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042254  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.042272  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042592  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042583  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.042727  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042852  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.055810  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0127 13:34:57.056237  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.056772  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.056801  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.057165  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.057501  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.059165  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.059398  426243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.059418  426243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:34:57.059437  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.062703  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063236  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.063266  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063369  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.063544  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.063694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.063831  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
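	(annotation) The four sshutil clients set up above give the harness one SSH session per addon, so manifests can be copied in and the images noted earlier (gcr.io/k8s-minikube/storage-provisioner:v5, docker.io/kubernetesui/dashboard:v2.7.0, registry.k8s.io/echoserver:1.4) can be pulled by CRI-O. A quick way to confirm those pulls by hand, assuming crictl is present in the guest image (it normally is on the CRI-O ISO; this is an illustration, not part of the harness):
		minikube -p embed-certs-174381 ssh "sudo crictl images | grep -E 'dashboard|echoserver|storage-provisioner'"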
	I0127 13:34:57.242347  426243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:34:57.326178  426243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352801  426243 node_ready.go:49] node "embed-certs-174381" has status "Ready":"True"
	I0127 13:34:57.352828  426243 node_ready.go:38] duration metric: took 26.613856ms for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352841  426243 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:34:57.368293  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:57.372941  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:34:57.372962  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:34:57.391676  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.418587  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:34:57.418616  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:34:57.446588  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.460844  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:34:57.460869  426243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:34:57.507947  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:34:57.507976  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:34:57.542669  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.542701  426243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:34:57.630641  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:34:57.630672  426243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:34:57.639506  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.693463  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:34:57.693498  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:34:57.806045  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:34:57.806082  426243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:34:57.930058  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:34:57.930101  426243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:34:58.055263  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:34:58.055295  426243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:34:58.110576  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:34:58.110609  426243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:34:58.202270  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.202305  426243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:34:58.293311  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
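	(annotation) The two kubectl invocations above show the pattern used for every addon in this run: each manifest is scp'd into /etc/kubernetes/addons/ on the guest, then the whole set is applied in one batch with the kubectl binary staged under /var/lib/minikube/binaries/v1.32.1. Re-running the metrics-server batch by hand over the same SSH path would look roughly like this (profile, paths and file list copied from the log; illustrative only):
		minikube -p embed-certs-174381 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml"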
	I0127 13:34:58.514356  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067720868s)
	I0127 13:34:58.514435  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514450  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.514846  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.514876  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.514894  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.514909  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514920  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.515161  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.515197  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.515860  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516243  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124532885s)
	I0127 13:34:58.516270  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516281  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.516739  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.516757  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.516768  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516776  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516787  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.517207  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.517230  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549206  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.549228  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.549614  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.549638  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549648  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260116  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.620545789s)
	I0127 13:34:59.260244  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260271  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.260620  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260713  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.260730  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.260746  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260761  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.261011  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.261041  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.261061  426243 addons.go:479] Verifying addon metrics-server=true in "embed-certs-174381"
	I0127 13:34:59.395546  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.395572  426243 pod_ready.go:82] duration metric: took 2.027244475s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.395586  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407673  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.407695  426243 pod_ready.go:82] duration metric: took 12.102291ms for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407705  426243 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417168  426243 pod_ready.go:93] pod "etcd-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.417190  426243 pod_ready.go:82] duration metric: took 9.47928ms for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417199  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:00.168433  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.875044372s)
	I0127 13:35:00.168496  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.168520  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.168866  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.170590  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.170645  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.170666  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.170673  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.171042  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.171132  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.171105  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.172686  426243 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-174381 addons enable metrics-server
	
	I0127 13:35:00.174376  426243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
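	(annotation) At this point all four requested addons report enabled for the profile. The hint printed by the harness can be run as-is, and the resulting state is visible through the standard addons subcommand:
		# list the addons now enabled for this profile
		minikube -p embed-certs-174381 addons list
		# the follow-up suggested in the log output above
		minikube -p embed-certs-174381 addons enable metrics-server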
	I0127 13:34:59.517968  427154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.861694115s)
	I0127 13:34:59.518062  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:34:59.536180  427154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:34:59.547986  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:34:59.561566  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:34:59.561591  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:34:59.561645  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:34:59.574802  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:34:59.574872  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:34:59.588185  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:34:59.598292  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:34:59.598356  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:34:59.608921  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.621764  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:34:59.621825  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.635526  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:34:59.646582  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:34:59.646644  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
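	(annotation) The grep/rm pairs above are the stale-kubeconfig cleanup: each of the four config files is kept only if it already points at https://control-plane.minikube.internal:8443, and here every grep exits with status 2 because the files are simply absent, so the rm calls are no-ops. The same check condensed into a loop (a sketch of the behaviour shown in the log, not the harness source):
		for f in admin kubelet controller-manager scheduler; do
		  # drop the file unless it already references the expected control-plane endpoint
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
		    || sudo rm -f "/etc/kubernetes/${f}.conf"
		done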
	I0127 13:34:59.657975  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:34:59.745239  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:34:59.745337  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:59.946676  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:59.946890  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:59.947050  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:35:00.183580  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:55.388471  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:55.388933  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:55.388964  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:55.388914  429104 retry.go:31] will retry after 1.346738272s: waiting for domain to come up
	I0127 13:34:56.737433  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:56.738024  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:56.738081  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:56.738007  429104 retry.go:31] will retry after 1.120347472s: waiting for domain to come up
	I0127 13:34:57.860265  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:57.860912  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:57.860943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:57.860882  429104 retry.go:31] will retry after 2.152534572s: waiting for domain to come up
	I0127 13:35:00.015953  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:00.016579  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:00.016613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:00.016544  429104 retry.go:31] will retry after 2.588698804s: waiting for domain to come up
	I0127 13:35:00.184950  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:35:00.185049  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:35:00.185140  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:35:00.185334  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:35:00.185435  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:35:00.186094  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:35:00.186301  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:35:00.187022  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:35:00.187455  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:35:00.187928  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:35:00.188334  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:35:00.188531  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:35:00.188608  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:35:00.344156  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:35:00.836083  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:35:00.964664  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:35:01.072929  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:35:01.092946  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:35:01.097538  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:35:01.097961  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:35:01.292953  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:35:00.175566  426243 addons.go:514] duration metric: took 3.191465201s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:35:01.424773  426243 pod_ready.go:103] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"False"
	I0127 13:35:01.924012  426243 pod_ready.go:93] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:01.924044  426243 pod_ready.go:82] duration metric: took 2.506836977s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:01.924057  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.607848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:02.608639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:02.608669  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:02.608620  429104 retry.go:31] will retry after 2.763044938s: waiting for domain to come up
	I0127 13:35:01.294375  427154 out.go:235]   - Booting up control plane ...
	I0127 13:35:01.294569  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:35:01.306014  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:35:01.309847  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:35:01.310062  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:35:01.312436  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
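	(annotation) The kubeadm init recorded at 13:34:59.657975 reuses the pre-rendered /var/tmp/minikube/kubeadm.yaml and skips exactly the preflight checks that would fail on a recycled VM (existing manifest and etcd directories, port 10250, swap, CPU and memory minima). Once "[kubelet-start] Starting the kubelet" has passed, the static control-plane Pods can be watched from inside the guest with the admin kubeconfig kubeadm has just written, assuming kubectl was staged next to kubeadm as it is for the v1.32.1 profile above:
		sudo KUBECONFIG=/etc/kubernetes/admin.conf \
		  /var/lib/minikube/binaries/v1.20.0/kubectl get pods -n kube-system -w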
	I0127 13:35:02.931062  426243 pod_ready.go:93] pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.931095  426243 pod_ready.go:82] duration metric: took 1.007026875s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.931108  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936917  426243 pod_ready.go:93] pod "kube-proxy-cjsf9" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.936945  426243 pod_ready.go:82] duration metric: took 5.828276ms for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936957  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444155  426243 pod_ready.go:93] pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:03.444192  426243 pod_ready.go:82] duration metric: took 507.225554ms for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444203  426243 pod_ready.go:39] duration metric: took 6.091349359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:35:03.444226  426243 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:03.444294  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:03.488162  426243 api_server.go:72] duration metric: took 6.504085901s to wait for apiserver process to appear ...
	I0127 13:35:03.488197  426243 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:03.488224  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:35:03.493586  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0127 13:35:03.494867  426243 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:03.494894  426243 api_server.go:131] duration metric: took 6.689991ms to wait for apiserver health ...
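	(annotation) The healthz probe and version read above are the harness's readiness gate for the API server. The same probe by hand from the host (endpoint and expected body taken from the log; -k skips verification of the cluster CA for brevity):
		curl -k https://192.168.39.7:8443/healthz
		# expected response body: ok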
	I0127 13:35:03.494903  426243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:03.575835  426243 system_pods.go:59] 9 kube-system pods found
	I0127 13:35:03.575871  426243 system_pods.go:61] "coredns-668d6bf9bc-9ncnm" [8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7] Running
	I0127 13:35:03.575877  426243 system_pods.go:61] "coredns-668d6bf9bc-hjncm" [68641e50-9f99-4811-9752-c7dc0db47502] Running
	I0127 13:35:03.575881  426243 system_pods.go:61] "etcd-embed-certs-174381" [fc5cb0ba-724d-4b3d-a6d0-65644ed57d99] Running
	I0127 13:35:03.575886  426243 system_pods.go:61] "kube-apiserver-embed-certs-174381" [7afdc2d3-86bd-480d-a081-e1475ff21346] Running
	I0127 13:35:03.575890  426243 system_pods.go:61] "kube-controller-manager-embed-certs-174381" [fa410171-2b30-4c79-97d4-87c1549fd75c] Running
	I0127 13:35:03.575894  426243 system_pods.go:61] "kube-proxy-cjsf9" [c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6] Running
	I0127 13:35:03.575901  426243 system_pods.go:61] "kube-scheduler-embed-certs-174381" [ab92b381-fb78-4aa1-bc55-4e47a58f2c32] Running
	I0127 13:35:03.575908  426243 system_pods.go:61] "metrics-server-f79f97bbb-hxlwf" [cb779c78-85f9-48e7-88c3-f087f57547e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:03.575913  426243 system_pods.go:61] "storage-provisioner" [3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1] Running
	I0127 13:35:03.575922  426243 system_pods.go:74] duration metric: took 81.012821ms to wait for pod list to return data ...
	I0127 13:35:03.575931  426243 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:03.772597  426243 default_sa.go:45] found service account: "default"
	I0127 13:35:03.772641  426243 default_sa.go:55] duration metric: took 196.700969ms for default service account to be created ...
	I0127 13:35:03.772655  426243 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:35:03.976966  426243 system_pods.go:87] 9 kube-system pods found
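	(annotation) The 13:35:03 poll sees nine kube-system pods, with only metrics-server-f79f97bbb-hxlwf still Pending while its container comes up. The equivalent view from the host uses the kubectl context minikube writes for the profile:
		kubectl --context embed-certs-174381 get pods -n kube-system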
	I0127 13:35:05.375624  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:05.376167  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:05.376199  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:05.376124  429104 retry.go:31] will retry after 2.824398155s: waiting for domain to come up
	I0127 13:35:08.203385  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:08.203848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:08.203881  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:08.203823  429104 retry.go:31] will retry after 4.529537578s: waiting for domain to come up
	I0127 13:35:12.735786  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736343  429070 main.go:141] libmachine: (newest-cni-639843) found domain IP: 192.168.50.22
	I0127 13:35:12.736364  429070 main.go:141] libmachine: (newest-cni-639843) reserving static IP address...
	I0127 13:35:12.736378  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has current primary IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736707  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.736748  429070 main.go:141] libmachine: (newest-cni-639843) reserved static IP address 192.168.50.22 for domain newest-cni-639843
	I0127 13:35:12.736770  429070 main.go:141] libmachine: (newest-cni-639843) DBG | skip adding static IP to network mk-newest-cni-639843 - found existing host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"}
	I0127 13:35:12.736785  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Getting to WaitForSSH function...
	I0127 13:35:12.736810  429070 main.go:141] libmachine: (newest-cni-639843) waiting for SSH...
	I0127 13:35:12.739230  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739563  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.739592  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739721  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH client type: external
	I0127 13:35:12.739746  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa (-rw-------)
	I0127 13:35:12.739781  429070 main.go:141] libmachine: (newest-cni-639843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:35:12.739791  429070 main.go:141] libmachine: (newest-cni-639843) DBG | About to run SSH command:
	I0127 13:35:12.739800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | exit 0
	I0127 13:35:12.866664  429070 main.go:141] libmachine: (newest-cni-639843) DBG | SSH cmd err, output: <nil>: 
	I0127 13:35:12.867059  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetConfigRaw
	I0127 13:35:12.867776  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:12.870461  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.870943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.870979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.871221  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:35:12.871401  429070 machine.go:93] provisionDockerMachine start ...
	I0127 13:35:12.871421  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:12.871618  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.873979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874373  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.874411  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874581  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.874746  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.874903  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.875063  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.875221  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.875426  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.875440  429070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:35:12.979102  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:35:12.979140  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979406  429070 buildroot.go:166] provisioning hostname "newest-cni-639843"
	I0127 13:35:12.979435  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.982631  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983000  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.983025  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983170  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.983324  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983447  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983605  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.983809  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.984033  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.984051  429070 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-639843 && echo "newest-cni-639843" | sudo tee /etc/hostname
	I0127 13:35:13.107964  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-639843
	
	I0127 13:35:13.108004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.111168  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111589  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.111617  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111790  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.111995  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112158  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112289  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.112481  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.112709  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.112733  429070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-639843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-639843/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-639843' | sudo tee -a /etc/hosts; 
				fi
			fi
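	(annotation) The two SSH commands above (set the hostname, then patch or append the 127.0.1.1 entry) are how provisionDockerMachine makes the guest resolve its own name: the sed branch rewrites an existing 127.0.1.1 line, the tee branch appends one if none exists. Verifying the result afterwards is a one-liner:
		minikube -p newest-cni-639843 ssh "hostname && grep 127.0.1.1 /etc/hosts"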
	I0127 13:35:13.226643  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:35:13.226683  429070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:35:13.226734  429070 buildroot.go:174] setting up certificates
	I0127 13:35:13.226749  429070 provision.go:84] configureAuth start
	I0127 13:35:13.226767  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:13.227060  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:13.230284  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230719  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.230752  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.233444  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233798  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.233832  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233972  429070 provision.go:143] copyHostCerts
	I0127 13:35:13.234039  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:35:13.234053  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:35:13.234146  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:35:13.234301  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:35:13.234313  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:35:13.234354  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:35:13.234450  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:35:13.234462  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:35:13.234497  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:35:13.234598  429070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.newest-cni-639843 san=[127.0.0.1 192.168.50.22 localhost minikube newest-cni-639843]
	I0127 13:35:13.505038  429070 provision.go:177] copyRemoteCerts
	I0127 13:35:13.505119  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:35:13.505154  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.508162  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508530  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.508555  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508759  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.508944  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.509117  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.509267  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:13.595888  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:35:13.621151  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:35:13.647473  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:35:13.673605  429070 provision.go:87] duration metric: took 446.83901ms to configureAuth
	I0127 13:35:13.673655  429070 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:35:13.673889  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:13.674004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.676982  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677392  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.677421  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677573  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.677762  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.677972  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.678123  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.678273  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.678496  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.678527  429070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:35:13.921465  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:35:13.921494  429070 machine.go:96] duration metric: took 1.050079095s to provisionDockerMachine
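The CRIO_MINIKUBE_OPTIONS step just above is a single command run over SSH with libmachine's native client. A rough sketch of running the same kind of remote command with golang.org/x/crypto/ssh, assuming key-based auth with the id_rsa path from the log; this is illustrative only, not minikube's ssh_runner implementation:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log above; treat both as example values.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.50.22:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same shape of command as the log: write a sysconfig drop-in, then restart crio.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Println(string(out))
	if err != nil {
		log.Fatal(err)
	}
}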
	I0127 13:35:13.921510  429070 start.go:293] postStartSetup for "newest-cni-639843" (driver="kvm2")
	I0127 13:35:13.921522  429070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:35:13.921543  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:13.921954  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:35:13.922025  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.925574  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.925941  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.926012  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.926266  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.926493  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.926675  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.926888  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.014753  429070 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:35:14.019344  429070 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:35:14.019374  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:35:14.019439  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:35:14.019540  429070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:35:14.019659  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:35:14.031277  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:14.060121  429070 start.go:296] duration metric: took 138.59357ms for postStartSetup
	I0127 13:35:14.060165  429070 fix.go:56] duration metric: took 23.600678344s for fixHost
	I0127 13:35:14.060188  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.063145  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063514  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.063542  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063761  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.063980  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064176  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064340  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.064541  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:14.064724  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:14.064738  429070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:35:14.172785  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984914.150810987
	
	I0127 13:35:14.172823  429070 fix.go:216] guest clock: 1737984914.150810987
	I0127 13:35:14.172832  429070 fix.go:229] Guest: 2025-01-27 13:35:14.150810987 +0000 UTC Remote: 2025-01-27 13:35:14.060169498 +0000 UTC m=+23.763612053 (delta=90.641489ms)
	I0127 13:35:14.172889  429070 fix.go:200] guest clock delta is within tolerance: 90.641489ms
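fix.go compares the guest clock (read via `date +%s.%N`) against the host clock and accepts the machine if the delta is within tolerance. A small sketch of that comparison; the tolerance value here is assumed for illustration, not minikube's:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the output of `date +%s.%N` (seconds.nanoseconds) into a time.Time.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Value copied from the log output above.
	guest, err := guestTime("1737984914.150810987")
	if err != nil {
		panic(err)
	}
	delta := time.Duration(math.Abs(float64(time.Since(guest))))
	const tolerance = 2 * time.Second // assumed example tolerance
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
}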
	I0127 13:35:14.172905  429070 start.go:83] releasing machines lock for "newest-cni-639843", held for 23.713435883s
	I0127 13:35:14.172938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.173202  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:14.176163  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176559  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.176600  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177182  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177351  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177450  429070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:35:14.177498  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.177596  429070 ssh_runner.go:195] Run: cat /version.json
	I0127 13:35:14.177625  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.180456  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180561  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180838  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180910  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.180914  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180944  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.181150  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.181189  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181344  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181357  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181546  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181536  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.181739  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.283980  429070 ssh_runner.go:195] Run: systemctl --version
	I0127 13:35:14.290329  429070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:35:14.450608  429070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:35:14.461512  429070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:35:14.461597  429070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:35:14.482924  429070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
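The find/mv step above side-lines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so they no longer load. The same effect expressed directly in Go, as a sketch to be run only inside a disposable VM:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Same effect as the `find ... -exec mv {} {}.mk_disabled` command in the log.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, "disable failed:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}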
	I0127 13:35:14.482951  429070 start.go:495] detecting cgroup driver to use...
	I0127 13:35:14.483022  429070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:35:14.503452  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:35:14.517592  429070 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:35:14.517659  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:35:14.532792  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:35:14.547306  429070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:35:14.671116  429070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:35:14.818034  429070 docker.go:233] disabling docker service ...
	I0127 13:35:14.818133  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:35:14.832550  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:35:14.845137  429070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:35:14.986833  429070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:35:15.122943  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:35:15.137706  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:35:15.157591  429070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:35:15.157669  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.168185  429070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:35:15.168268  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.178876  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.188792  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.198951  429070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:35:15.209169  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.219549  429070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.238633  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.249729  429070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:35:15.259178  429070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:35:15.259244  429070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:35:15.272097  429070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:35:15.281611  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:15.403472  429070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:35:15.498842  429070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:35:15.498928  429070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:35:15.505405  429070 start.go:563] Will wait 60s for crictl version
	I0127 13:35:15.505478  429070 ssh_runner.go:195] Run: which crictl
	I0127 13:35:15.509869  429070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:35:15.580026  429070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
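After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for a working crictl. A bare-bones version of that polling loop, checked locally here whereas the real step runs the stat over SSH:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path until it exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step in the log.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}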
	I0127 13:35:15.580122  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.609376  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.643173  429070 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:35:15.644483  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:15.647483  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.647905  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:15.647930  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.648148  429070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 13:35:15.652911  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:15.668696  429070 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 13:35:15.670127  429070 kubeadm.go:883] updating cluster {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mu
ltiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:35:15.670264  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:35:15.670328  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:15.716362  429070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 13:35:15.716455  429070 ssh_runner.go:195] Run: which lz4
	I0127 13:35:15.721254  429070 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:35:15.727443  429070 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:35:15.727478  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 13:35:17.208454  429070 crio.go:462] duration metric: took 1.487249966s to copy over tarball
	I0127 13:35:17.208542  429070 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:35:19.421239  429070 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.212662568s)
	I0127 13:35:19.421271  429070 crio.go:469] duration metric: took 2.21278342s to extract the tarball
	I0127 13:35:19.421281  429070 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 13:35:19.461756  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:19.504974  429070 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:35:19.505005  429070 cache_images.go:84] Images are preloaded, skipping loading
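Whether the preload can be skipped is decided by listing images through crictl and looking for the expected kube-apiserver tag. A sketch of that check; the JSON field names are my reading of typical `crictl images --output json` output and should be verified against your crictl version:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Minimal, assumed view of the crictl image-list JSON.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if strings.EqualFold(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.1")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preloaded kube-apiserver image present:", ok)
}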
	I0127 13:35:19.505015  429070 kubeadm.go:934] updating node { 192.168.50.22 8443 v1.32.1 crio true true} ...
	I0127 13:35:19.505173  429070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-639843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:35:19.505269  429070 ssh_runner.go:195] Run: crio config
	I0127 13:35:19.556732  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:19.556754  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:19.556766  429070 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 13:35:19.556791  429070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.22 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-639843 NodeName:newest-cni-639843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:35:19.556951  429070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-639843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:35:19.557032  429070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:35:19.567405  429070 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:35:19.567483  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:35:19.577572  429070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 13:35:19.595555  429070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:35:19.612336  429070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
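The kubeadm.yaml.new written above is rendered from the option set logged at kubeadm.go:189. A much-reduced sketch of generating one fragment of such a config with text/template, using values visible in the log; the real template covers the full Init/Cluster/Kubelet/KubeProxy configuration:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the kubeadm options struct in the log;
// the actual generator renders a much larger template.
type kubeadmParams struct {
	ClusterName   string
	APIServerPort int
	PodSubnet     string
	ServiceCIDR   string
	K8sVersion    string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		ClusterName:   "mk",
		APIServerPort: 8443,
		PodSubnet:     "10.42.0.0/16",
		ServiceCIDR:   "10.96.0.0/12",
		K8sVersion:    "v1.32.1",
	}
	tmpl := template.Must(template.New("cc").Parse(clusterConfigTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}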
	I0127 13:35:19.630199  429070 ssh_runner.go:195] Run: grep 192.168.50.22	control-plane.minikube.internal$ /etc/hosts
	I0127 13:35:19.634268  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:19.646912  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:19.764087  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:19.783083  429070 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843 for IP: 192.168.50.22
	I0127 13:35:19.783115  429070 certs.go:194] generating shared ca certs ...
	I0127 13:35:19.783139  429070 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:19.783330  429070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:35:19.783386  429070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:35:19.783400  429070 certs.go:256] generating profile certs ...
	I0127 13:35:19.783534  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/client.key
	I0127 13:35:19.783619  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key.505bfb94
	I0127 13:35:19.783671  429070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key
	I0127 13:35:19.783826  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:35:19.783866  429070 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:35:19.783880  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:35:19.783913  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:35:19.783939  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:35:19.783961  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:35:19.784010  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:19.784667  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:35:19.821550  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:35:19.860184  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:35:19.893311  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:35:19.926181  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:35:19.954565  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:35:19.997938  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:35:20.022058  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:35:20.045748  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:35:20.069279  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:35:20.092959  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:35:20.117180  429070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:35:20.135202  429070 ssh_runner.go:195] Run: openssl version
	I0127 13:35:20.141197  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:35:20.152160  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156810  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156871  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.162645  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:35:20.174920  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:35:20.187426  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192129  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192174  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.198019  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:35:20.210195  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:35:20.220934  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225588  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225622  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.231516  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:35:20.243779  429070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:35:20.248511  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:35:20.254523  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:35:20.260441  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:35:20.266429  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:35:20.272290  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:35:20.278051  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
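The series of `openssl x509 -noout -checkend 86400` calls above asks whether each control-plane certificate expires within the next 24 hours. The equivalent check written in Go with crypto/x509, with the path used only as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within d, the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Same 86400s (24h) window as the log checks.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}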
	I0127 13:35:20.284024  429070 kubeadm.go:392] StartCluster: {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Multi
NodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:35:20.284105  429070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:35:20.284164  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.332523  429070 cri.go:89] found id: ""
	I0127 13:35:20.332587  429070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:35:20.344932  429070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:35:20.344959  429070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:35:20.345011  429070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:35:20.355729  429070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:35:20.356795  429070 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-639843" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:20.357505  429070 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-639843" cluster setting kubeconfig missing "newest-cni-639843" context setting]
	I0127 13:35:20.358374  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:20.360037  429070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:35:20.371572  429070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.22
	I0127 13:35:20.371606  429070 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:35:20.371622  429070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:35:20.371679  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.418797  429070 cri.go:89] found id: ""
	I0127 13:35:20.418873  429070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:35:20.437304  429070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:35:20.447636  429070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:35:20.447660  429070 kubeadm.go:157] found existing configuration files:
	
	I0127 13:35:20.447704  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:35:20.458280  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:35:20.458335  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:35:20.469304  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:35:20.478639  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:35:20.478689  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:35:20.488624  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.497867  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:35:20.497908  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.507379  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:35:20.516362  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:35:20.516416  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:35:20.525787  429070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:35:20.542646  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:20.671597  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.498726  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.899789  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.965210  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:22.062165  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:22.062252  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:22.563318  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.063066  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.082649  429070 api_server.go:72] duration metric: took 1.020482627s to wait for apiserver process to appear ...
	I0127 13:35:23.082686  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:23.082711  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:23.083244  429070 api_server.go:269] stopped: https://192.168.50.22:8443/healthz: Get "https://192.168.50.22:8443/healthz": dial tcp 192.168.50.22:8443: connect: connection refused
	I0127 13:35:23.583699  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.503776  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:35:25.503807  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:35:25.503825  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.547403  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.547434  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:25.583659  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.589328  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.589357  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.082833  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.087881  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.087908  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.583159  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.592115  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.592148  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:27.083703  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.090407  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.098905  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.098928  429070 api_server.go:131] duration metric: took 4.01623437s to wait for apiserver health ...
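The repeated healthz checks above follow a simple pattern: poll https://<apiserver>:8443/healthz roughly every 500ms, treat a 500 response (whose body lists "[-]poststarthook/... failed" entries) as "not ready yet", and stop once the endpoint returns 200 "ok" or an overall deadline expires. The sketch below only illustrates that loop; it is not minikube's api_server.go code, and the URL, timings, and TLS shortcut are assumptions.

-- illustrative sketch (Go), not part of the captured log --
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns 200
// or the deadline passes. A 500 body lists which poststarthooks are still
// pending, like the "[-]poststarthook/... failed" lines in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves /healthz over HTTPS with a cluster-local CA; this
	// sketch skips certificate verification purely for brevity.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for apiserver /healthz")
}

func main() {
	if err := waitForHealthz("https://192.168.50.22:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
-- end sketch --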
	I0127 13:35:27.098938  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:27.098944  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:27.100651  429070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:35:27.101855  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:35:27.116286  429070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
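The two commands above create the CNI config directory and write a bridge conflist to /etc/cni/net.d/1-k8s.conflist. The log does not reproduce the 496-byte file itself, so the sketch below shows a generic bridge + host-local conflist of the kind that step produces and writes it the same way; the subnet, bridge name, and plugin list are assumptions, not the exact contents minikube generated.

-- illustrative sketch (Go), not part of the captured log --
package main

import (
	"log"
	"os"
	"path/filepath"
)

// A generic bridge CNI configuration: a Linux bridge with host-local IPAM plus
// the portmap plugin for hostPort support. Not minikube's exact conflist.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d" // created with "sudo mkdir -p" in the log above
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
-- end sketch --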
	I0127 13:35:27.139348  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.158680  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.158717  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.158730  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.158741  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.158748  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.158757  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:35:27.158766  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.158776  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.158785  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:35:27.158819  429070 system_pods.go:74] duration metric: took 19.446392ms to wait for pod list to return data ...
	I0127 13:35:27.158832  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.168338  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.168376  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.168392  429070 node_conditions.go:105] duration metric: took 9.550643ms to run NodePressure ...
	I0127 13:35:27.168416  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:27.459759  429070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:35:27.473184  429070 ops.go:34] apiserver oom_adj: -16
	I0127 13:35:27.473212  429070 kubeadm.go:597] duration metric: took 7.128244476s to restartPrimaryControlPlane
	I0127 13:35:27.473226  429070 kubeadm.go:394] duration metric: took 7.18920723s to StartCluster
	I0127 13:35:27.473251  429070 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.473341  429070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:27.475111  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.475373  429070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:35:27.475451  429070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:35:27.475562  429070 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-639843"
	I0127 13:35:27.475584  429070 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-639843"
	W0127 13:35:27.475598  429070 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:35:27.475598  429070 addons.go:69] Setting dashboard=true in profile "newest-cni-639843"
	I0127 13:35:27.475600  429070 addons.go:69] Setting metrics-server=true in profile "newest-cni-639843"
	I0127 13:35:27.475621  429070 addons.go:238] Setting addon dashboard=true in "newest-cni-639843"
	I0127 13:35:27.475629  429070 addons.go:238] Setting addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:27.475639  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	W0127 13:35:27.475643  429070 addons.go:247] addon metrics-server should already be in state true
	I0127 13:35:27.475676  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.475582  429070 addons.go:69] Setting default-storageclass=true in profile "newest-cni-639843"
	I0127 13:35:27.475611  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:27.475708  429070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-639843"
	W0127 13:35:27.475630  429070 addons.go:247] addon dashboard should already be in state true
	I0127 13:35:27.475812  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.476070  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476077  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476115  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476134  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476159  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476168  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476195  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476204  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.477011  429070 out.go:177] * Verifying Kubernetes components...
	I0127 13:35:27.478509  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:27.493703  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0127 13:35:27.493801  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I0127 13:35:27.493955  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0127 13:35:27.494221  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494259  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494795  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494819  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.494840  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494932  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494956  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495188  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495296  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495464  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.495481  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495764  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495798  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495812  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495819  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495871  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.496119  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0127 13:35:27.496433  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.496529  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.496572  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.496893  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.496916  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.497264  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.497502  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.502029  429070 addons.go:238] Setting addon default-storageclass=true in "newest-cni-639843"
	W0127 13:35:27.502051  429070 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:35:27.502080  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.502830  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.502873  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.512816  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I0127 13:35:27.513096  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0127 13:35:27.513275  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I0127 13:35:27.535151  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535226  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535266  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535748  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535766  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.535769  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535791  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.536087  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536347  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536392  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.536559  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.537321  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.537343  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.537676  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.537946  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.538406  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539127  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539700  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.540468  429070 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:35:27.540479  429070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:35:27.541259  429070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:35:27.542133  429070 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.542154  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:35:27.542174  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.542782  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:35:27.542801  429070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:35:27.542820  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.543610  429070 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:35:27.544743  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:35:27.544762  429070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:35:27.544780  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.545935  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546330  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.546364  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546495  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.546708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.546872  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.547017  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.547822  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548084  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548244  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548291  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548448  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548585  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548619  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548786  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548800  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.548938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548980  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.549036  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.549180  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.554799  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0127 13:35:27.555253  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.555780  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.555800  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.556187  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.556616  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.556646  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.574277  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0127 13:35:27.574815  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.575396  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.575420  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.575741  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.575966  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.577346  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.577556  429070 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.577574  429070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:35:27.577594  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.580061  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580408  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.580432  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580659  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.580836  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.580987  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.581148  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.713210  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:27.737971  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:27.738049  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:27.755609  429070 api_server.go:72] duration metric: took 280.198045ms to wait for apiserver process to appear ...
	I0127 13:35:27.755639  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:27.755660  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.765216  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.767614  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.767639  429070 api_server.go:131] duration metric: took 11.991322ms to wait for apiserver health ...
	I0127 13:35:27.767650  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.781696  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.781778  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.781799  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.781815  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.781827  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.781836  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running
	I0127 13:35:27.781862  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.781874  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.781884  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running
	I0127 13:35:27.781895  429070 system_pods.go:74] duration metric: took 14.236485ms to wait for pod list to return data ...
	I0127 13:35:27.781908  429070 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:27.787854  429070 default_sa.go:45] found service account: "default"
	I0127 13:35:27.787884  429070 default_sa.go:55] duration metric: took 5.965578ms for default service account to be created ...
	I0127 13:35:27.787899  429070 kubeadm.go:582] duration metric: took 312.493014ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:35:27.787924  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.793927  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.793949  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.793961  429070 node_conditions.go:105] duration metric: took 6.028431ms to run NodePressure ...
	I0127 13:35:27.793975  429070 start.go:241] waiting for startup goroutines ...
	I0127 13:35:27.806081  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.851437  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.912936  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:35:27.912967  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:35:27.941546  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:35:27.941579  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:35:28.017628  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:35:28.017663  429070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:35:28.027973  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:35:28.028016  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:35:28.097111  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.097146  429070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:35:28.148404  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:35:28.148439  429070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:35:28.272234  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.273446  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:35:28.273473  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:35:28.324863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:35:28.324897  429070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:35:28.400474  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:35:28.400504  429070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:35:28.460550  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:35:28.460583  429070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:35:28.508999  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:35:28.509031  429070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:35:28.555538  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555570  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.555889  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.555906  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.555915  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555923  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.556151  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.556180  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.556196  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.564252  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.564277  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.564553  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.564574  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.564893  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.605863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:28.605896  429070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:35:28.650259  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:29.517093  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.66560932s)
	I0127 13:35:29.517160  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517173  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517607  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:29.517645  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517655  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.517664  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517672  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517974  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517996  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741184  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46890411s)
	I0127 13:35:29.741241  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741252  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.741558  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.741576  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741586  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741609  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.742656  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.742680  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.742692  429070 addons.go:479] Verifying addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:29.742659  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:30.069134  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.418812542s)
	I0127 13:35:30.069214  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069233  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069539  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069559  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.069568  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069575  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069840  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069856  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.071209  429070 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-639843 addons enable metrics-server
	
	I0127 13:35:30.072569  429070 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 13:35:30.073970  429070 addons.go:514] duration metric: took 2.598533083s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 13:35:30.074007  429070 start.go:246] waiting for cluster config update ...
	I0127 13:35:30.074019  429070 start.go:255] writing updated cluster config ...
	I0127 13:35:30.074258  429070 ssh_runner.go:195] Run: rm -f paused
	I0127 13:35:30.125745  429070 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:35:30.127324  429070 out.go:177] * Done! kubectl is now configured to use "newest-cni-639843" cluster and "default" namespace by default
	I0127 13:35:41.313958  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:35:41.315406  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:41.315596  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:46.316260  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:46.316520  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:56.316974  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:56.317208  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:16.318338  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:16.318524  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320677  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:56.320945  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320963  427154 kubeadm.go:310] 
	I0127 13:36:56.321020  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:36:56.321085  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:36:56.321099  427154 kubeadm.go:310] 
	I0127 13:36:56.321165  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:36:56.321228  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:36:56.321357  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:36:56.321378  427154 kubeadm.go:310] 
	I0127 13:36:56.321499  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:36:56.321545  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:36:56.321574  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:36:56.321580  427154 kubeadm.go:310] 
	I0127 13:36:56.321720  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:36:56.321827  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:36:56.321840  427154 kubeadm.go:310] 
	I0127 13:36:56.321935  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:36:56.322018  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:36:56.322099  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:36:56.322162  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:36:56.322169  427154 kubeadm.go:310] 
	I0127 13:36:56.323303  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:36:56.323399  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:36:56.323478  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 13:36:56.323617  427154 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
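The failure above comes down to the "[kubelet-check]" probe: kubeadm repeatedly issues the equivalent of curl -sSL http://localhost:10248/healthz and keeps getting "connection refused" because the kubelet never came up, until the 4m0s wait-control-plane timeout fires. The sketch below shows a single iteration of that probe only; it is an illustrative stand-in, not kubeadm's implementation.

-- illustrative sketch (Go), not part of the captured log --
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// One iteration of the local kubelet health probe kubeadm reports above.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// The state captured in the log: dial tcp 127.0.0.1:10248: connection refused.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}
-- end sketch --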
	
	I0127 13:36:56.323664  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:36:56.804696  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:36:56.819996  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:36:56.830103  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:36:56.830120  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:36:56.830161  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:36:56.839297  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:36:56.839351  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:36:56.848603  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:36:56.857433  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:36:56.857500  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:36:56.867735  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.876669  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:36:56.876721  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.885857  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:36:56.894734  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:36:56.894788  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:36:56.904112  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:36:56.975515  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:36:56.975724  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:36:57.110596  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:36:57.110748  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:36:57.110890  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:36:57.287182  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:36:57.289124  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:36:57.289247  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:36:57.289310  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:36:57.289405  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:36:57.289504  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:36:57.289595  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:36:57.289665  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:36:57.289780  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:36:57.290345  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:36:57.291337  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:36:57.292274  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:36:57.292554  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:36:57.292622  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:36:57.586245  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:36:57.746278  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:36:57.846816  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:36:57.985775  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:36:58.007369  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:36:58.008417  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:36:58.008485  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:36:58.134182  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:36:58.136066  427154 out.go:235]   - Booting up control plane ...
	I0127 13:36:58.136194  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:36:58.148785  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:36:58.148921  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:36:58.149274  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:36:58.153395  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:37:38.155987  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:37:38.156613  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:38.156831  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:43.157356  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:43.157567  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:53.158341  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:53.158675  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:13.158624  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:13.158876  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157583  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:53.157824  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157839  427154 kubeadm.go:310] 
	I0127 13:38:53.157896  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:38:53.157954  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:38:53.157966  427154 kubeadm.go:310] 
	I0127 13:38:53.158014  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:38:53.158064  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:38:53.158222  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:38:53.158234  427154 kubeadm.go:310] 
	I0127 13:38:53.158404  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:38:53.158453  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:38:53.158483  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:38:53.158491  427154 kubeadm.go:310] 
	I0127 13:38:53.158624  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:38:53.158726  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:38:53.158741  427154 kubeadm.go:310] 
	I0127 13:38:53.158894  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:38:53.159040  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:38:53.159165  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:38:53.159264  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:38:53.159275  427154 kubeadm.go:310] 
	I0127 13:38:53.159902  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:38:53.160042  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:38:53.160128  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:38:53.160213  427154 kubeadm.go:394] duration metric: took 8m2.798471593s to StartCluster
	I0127 13:38:53.160286  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:38:53.160377  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:38:53.205471  427154 cri.go:89] found id: ""
	I0127 13:38:53.205496  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.205504  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:38:53.205510  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:38:53.205577  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:38:53.240500  427154 cri.go:89] found id: ""
	I0127 13:38:53.240532  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.240543  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:38:53.240564  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:38:53.240625  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:38:53.282232  427154 cri.go:89] found id: ""
	I0127 13:38:53.282267  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.282279  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:38:53.282287  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:38:53.282354  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:38:53.315589  427154 cri.go:89] found id: ""
	I0127 13:38:53.315643  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.315659  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:38:53.315666  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:38:53.315735  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:38:53.349806  427154 cri.go:89] found id: ""
	I0127 13:38:53.349836  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.349844  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:38:53.349850  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:38:53.349906  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:38:53.382052  427154 cri.go:89] found id: ""
	I0127 13:38:53.382084  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.382095  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:38:53.382103  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:38:53.382176  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:38:53.416057  427154 cri.go:89] found id: ""
	I0127 13:38:53.416091  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.416103  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:38:53.416120  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:38:53.416185  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:38:53.449983  427154 cri.go:89] found id: ""
	I0127 13:38:53.450017  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.450029  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:38:53.450046  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:38:53.450064  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:38:53.498208  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:38:53.498242  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:38:53.552441  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:38:53.552472  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:38:53.567811  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:38:53.567841  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:38:53.646625  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:38:53.646651  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:38:53.646667  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 13:38:53.748675  427154 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 13:38:53.748747  427154 out.go:270] * 
	W0127 13:38:53.748849  427154 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0127 13:38:53.748865  427154 out.go:270] * 
	W0127 13:38:53.749670  427154 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:38:53.753264  427154 out.go:201] 
	W0127 13:38:53.754315  427154 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	W0127 13:38:53.754372  427154 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 13:38:53.754397  427154 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 13:38:53.755624  427154 out.go:201] 
	
	
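	For reference, a minimal troubleshooting sketch for the failure above, assuming shell access to the failing node (for example via 'minikube ssh -p <profile>'; the profile name is a placeholder). The individual commands are the ones quoted in the kubeadm output and in the Suggestion line, not a verified fix:
	
		# on the node: check whether the kubelet is running and why it exited
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
	
		# on the node: list Kubernetes containers known to CRI-O, then inspect a failing one
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
		# on the host: retry the start with the cgroup-driver hint from the Suggestion line
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	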
	==> CRI-O <==
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.719376898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986095719352567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e5e9d41-70cb-4c6c-a358-3911a3a61342 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.720084618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bdc9862-1175-4cd4-9b5f-1d7498767ce2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.720149067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bdc9862-1175-4cd4-9b5f-1d7498767ce2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.720389426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b72cf3fa4f8149ee6eb5119a64362fa3de8e2918e97a0bcc9658d15bf83fe20,PodSandboxId:e00bef3bcb3e3e9dba6a7811fff550e9a392c5ac9d92195098da5c9be9854e26,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737985882086443079,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-kfgkj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 96b88db1-1c62-4298-9ad1-437085020af8,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bcd6316e521e4e42d783b7c2cd4308138a6fbce6a8ca48d7cc7b73c2cd9861,PodSandboxId:1651acbef779c85403901eaeb35d210099722f8885f2f22decf7a3aee8f69878,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737984916576964350,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-dvg4b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 997b68ba-22cd-42cf-a9a6-633a361c2af7,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69839fcf5327e62d4ae92d2c9bcabd851ee2194c7b2f3f910c78fd577c97f35a,PodSandboxId:fbb770fdd62e5949e919b800dfb41368f881e327d6b8e8a6a0276b73bffe082e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737984899036464003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed597ee701a2497073e3af9dd358a11e196b1625d097b2dc4bb68a96f42e9eda,PodSandboxId:395bc3cb00a48d810566d4b696e74752f66600a84f52fe200010199e9c9ffd4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984898475100713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hjncm,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 68641e50-9f99-4811-9752-c7dc0db47502,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ef3eda416e4caf07358e872fef32f2e2ee423e06ddd5a929c66d13447653ce,PodSandboxId:96ee1437a5ec61f829b30de99bea6880533a70195f3a216ab9ca8c91430d9df0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984898214579035,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9ncnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab57403598ff461ac338d9860621cc34417fa4df166cc9be49c667666d0f80d,PodSandboxId:e38d5d8508ab99270398e9c050ceeffda478d11670dc4672e573ff1a5eb7e785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737984897082217337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cjsf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc0ff8f0ff9f19054f1d06ce383406166899902d86c04c9c66d020b4f6bd9cc4,PodSandboxId:223ebf43db05e1dd32585880953b4c5132d3881cea28181f78c69c4ae603a797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c311
3e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737984885921834485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559c2dd6ab9d4ea526efe8e9708e08ee,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718423e950e47aded11a5a88caa034d0993357baf0abfca8a57fd4f78b84907a,PodSandboxId:7c5c85aa511a9051fffc0b71445a018b589b831fb822755bb69e18716d6c6517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b5
98fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737984885900747155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eb07855ee228cc800e7f829cc67fc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47657543862e30cf3cd220ed62287f082f2ca7a6e4a0382e180e6817c9774a,PodSandboxId:b6d90431f5114e0addaf1c65e9ec19b96f9b7e36ce75118e08ef9df71cec37ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3a
d8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737984885809950549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dba6850448ed3c704de746edb1e3a9cb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b27ce31f5451880e28035ff21e918548dd1963b152659597fbd744ea190db77,PodSandboxId:bd135f2af193f7d78c0091182d87cbd9c249c0e308b39793cbd7db37a6d4bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737984885830691868,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca91de734b13ddb683e149b359434fc7,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15884f8d140f189530498c89f50dd9fda5897d7079f30ed7030fb33297714036,PodSandboxId:7f8ca46086f8259097c039b4b70a0490df1f8db3bc61159fa3f771b940a67d34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737984559428302761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca91de734b13ddb683e149b359434fc7,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bdc9862-1175-4cd4-9b5f-1d7498767ce2 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.763408662Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=513530d9-2c29-46d0-9758-54a714f8f789 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.763499855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=513530d9-2c29-46d0-9758-54a714f8f789 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.764730069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20868e08-d9cc-4f50-8b83-f7f909033442 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.765242114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986095765220637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20868e08-d9cc-4f50-8b83-f7f909033442 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.765645889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8809bd03-f05f-4629-812b-52a44c3f7281 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.765719714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8809bd03-f05f-4629-812b-52a44c3f7281 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.766024129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b72cf3fa4f8149ee6eb5119a64362fa3de8e2918e97a0bcc9658d15bf83fe20,PodSandboxId:e00bef3bcb3e3e9dba6a7811fff550e9a392c5ac9d92195098da5c9be9854e26,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737985882086443079,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-kfgkj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 96b88db1-1c62-4298-9ad1-437085020af8,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bcd6316e521e4e42d783b7c2cd4308138a6fbce6a8ca48d7cc7b73c2cd9861,PodSandboxId:1651acbef779c85403901eaeb35d210099722f8885f2f22decf7a3aee8f69878,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737984916576964350,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-dvg4b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 997b68ba-22cd-42cf-a9a6-633a361c2af7,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69839fcf5327e62d4ae92d2c9bcabd851ee2194c7b2f3f910c78fd577c97f35a,PodSandboxId:fbb770fdd62e5949e919b800dfb41368f881e327d6b8e8a6a0276b73bffe082e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737984899036464003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed597ee701a2497073e3af9dd358a11e196b1625d097b2dc4bb68a96f42e9eda,PodSandboxId:395bc3cb00a48d810566d4b696e74752f66600a84f52fe200010199e9c9ffd4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984898475100713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hjncm,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 68641e50-9f99-4811-9752-c7dc0db47502,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ef3eda416e4caf07358e872fef32f2e2ee423e06ddd5a929c66d13447653ce,PodSandboxId:96ee1437a5ec61f829b30de99bea6880533a70195f3a216ab9ca8c91430d9df0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984898214579035,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9ncnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab57403598ff461ac338d9860621cc34417fa4df166cc9be49c667666d0f80d,PodSandboxId:e38d5d8508ab99270398e9c050ceeffda478d11670dc4672e573ff1a5eb7e785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737984897082217337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cjsf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc0ff8f0ff9f19054f1d06ce383406166899902d86c04c9c66d020b4f6bd9cc4,PodSandboxId:223ebf43db05e1dd32585880953b4c5132d3881cea28181f78c69c4ae603a797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c311
3e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737984885921834485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559c2dd6ab9d4ea526efe8e9708e08ee,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718423e950e47aded11a5a88caa034d0993357baf0abfca8a57fd4f78b84907a,PodSandboxId:7c5c85aa511a9051fffc0b71445a018b589b831fb822755bb69e18716d6c6517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b5
98fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737984885900747155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eb07855ee228cc800e7f829cc67fc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47657543862e30cf3cd220ed62287f082f2ca7a6e4a0382e180e6817c9774a,PodSandboxId:b6d90431f5114e0addaf1c65e9ec19b96f9b7e36ce75118e08ef9df71cec37ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3a
d8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737984885809950549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dba6850448ed3c704de746edb1e3a9cb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b27ce31f5451880e28035ff21e918548dd1963b152659597fbd744ea190db77,PodSandboxId:bd135f2af193f7d78c0091182d87cbd9c249c0e308b39793cbd7db37a6d4bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737984885830691868,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca91de734b13ddb683e149b359434fc7,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15884f8d140f189530498c89f50dd9fda5897d7079f30ed7030fb33297714036,PodSandboxId:7f8ca46086f8259097c039b4b70a0490df1f8db3bc61159fa3f771b940a67d34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737984559428302761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca91de734b13ddb683e149b359434fc7,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8809bd03-f05f-4629-812b-52a44c3f7281 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.796670865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c3c0db2-14ed-4382-bef2-e0764ce2bb5e name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.796735680Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c3c0db2-14ed-4382-bef2-e0764ce2bb5e name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.798116134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48b20a1e-2c73-406a-91a6-772354efd8ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.798544952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986095798523385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48b20a1e-2c73-406a-91a6-772354efd8ac name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.799183318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6b947dd-dd84-4d64-9023-0f4436e79225 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.799252603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6b947dd-dd84-4d64-9023-0f4436e79225 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.799496112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b72cf3fa4f8149ee6eb5119a64362fa3de8e2918e97a0bcc9658d15bf83fe20,PodSandboxId:e00bef3bcb3e3e9dba6a7811fff550e9a392c5ac9d92195098da5c9be9854e26,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737985882086443079,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-kfgkj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 96b88db1-1c62-4298-9ad1-437085020af8,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bcd6316e521e4e42d783b7c2cd4308138a6fbce6a8ca48d7cc7b73c2cd9861,PodSandboxId:1651acbef779c85403901eaeb35d210099722f8885f2f22decf7a3aee8f69878,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737984916576964350,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-dvg4b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 997b68ba-22cd-42cf-a9a6-633a361c2af7,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69839fcf5327e62d4ae92d2c9bcabd851ee2194c7b2f3f910c78fd577c97f35a,PodSandboxId:fbb770fdd62e5949e919b800dfb41368f881e327d6b8e8a6a0276b73bffe082e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737984899036464003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed597ee701a2497073e3af9dd358a11e196b1625d097b2dc4bb68a96f42e9eda,PodSandboxId:395bc3cb00a48d810566d4b696e74752f66600a84f52fe200010199e9c9ffd4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984898475100713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hjncm,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 68641e50-9f99-4811-9752-c7dc0db47502,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ef3eda416e4caf07358e872fef32f2e2ee423e06ddd5a929c66d13447653ce,PodSandboxId:96ee1437a5ec61f829b30de99bea6880533a70195f3a216ab9ca8c91430d9df0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984898214579035,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9ncnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab57403598ff461ac338d9860621cc34417fa4df166cc9be49c667666d0f80d,PodSandboxId:e38d5d8508ab99270398e9c050ceeffda478d11670dc4672e573ff1a5eb7e785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737984897082217337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cjsf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc0ff8f0ff9f19054f1d06ce383406166899902d86c04c9c66d020b4f6bd9cc4,PodSandboxId:223ebf43db05e1dd32585880953b4c5132d3881cea28181f78c69c4ae603a797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c311
3e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737984885921834485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559c2dd6ab9d4ea526efe8e9708e08ee,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718423e950e47aded11a5a88caa034d0993357baf0abfca8a57fd4f78b84907a,PodSandboxId:7c5c85aa511a9051fffc0b71445a018b589b831fb822755bb69e18716d6c6517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b5
98fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737984885900747155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eb07855ee228cc800e7f829cc67fc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47657543862e30cf3cd220ed62287f082f2ca7a6e4a0382e180e6817c9774a,PodSandboxId:b6d90431f5114e0addaf1c65e9ec19b96f9b7e36ce75118e08ef9df71cec37ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3a
d8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737984885809950549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dba6850448ed3c704de746edb1e3a9cb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b27ce31f5451880e28035ff21e918548dd1963b152659597fbd744ea190db77,PodSandboxId:bd135f2af193f7d78c0091182d87cbd9c249c0e308b39793cbd7db37a6d4bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737984885830691868,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca91de734b13ddb683e149b359434fc7,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15884f8d140f189530498c89f50dd9fda5897d7079f30ed7030fb33297714036,PodSandboxId:7f8ca46086f8259097c039b4b70a0490df1f8db3bc61159fa3f771b940a67d34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737984559428302761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca91de734b13ddb683e149b359434fc7,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6b947dd-dd84-4d64-9023-0f4436e79225 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.840989019Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00cfccdc-c83b-4521-a414-0c1a6319d405 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.841079624Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00cfccdc-c83b-4521-a414-0c1a6319d405 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.841927095Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e30e607d-a810-44f4-b727-d5fae135ca1b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.842369076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986095842349499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e30e607d-a810-44f4-b727-d5fae135ca1b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.842901031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26959c1d-ab68-4d5d-8c01-808a66677304 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.842968908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26959c1d-ab68-4d5d-8c01-808a66677304 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:55 embed-certs-174381 crio[716]: time="2025-01-27 13:54:55.843247884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b72cf3fa4f8149ee6eb5119a64362fa3de8e2918e97a0bcc9658d15bf83fe20,PodSandboxId:e00bef3bcb3e3e9dba6a7811fff550e9a392c5ac9d92195098da5c9be9854e26,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737985882086443079,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-kfgkj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 96b88db1-1c62-4298-9ad1-437085020af8,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23bcd6316e521e4e42d783b7c2cd4308138a6fbce6a8ca48d7cc7b73c2cd9861,PodSandboxId:1651acbef779c85403901eaeb35d210099722f8885f2f22decf7a3aee8f69878,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737984916576964350,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-dvg4b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: 997b68ba-22cd-42cf-a9a6-633a361c2af7,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69839fcf5327e62d4ae92d2c9bcabd851ee2194c7b2f3f910c78fd577c97f35a,PodSandboxId:fbb770fdd62e5949e919b800dfb41368f881e327d6b8e8a6a0276b73bffe082e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737984899036464003,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed597ee701a2497073e3af9dd358a11e196b1625d097b2dc4bb68a96f42e9eda,PodSandboxId:395bc3cb00a48d810566d4b696e74752f66600a84f52fe200010199e9c9ffd4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984898475100713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-hjncm,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 68641e50-9f99-4811-9752-c7dc0db47502,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19ef3eda416e4caf07358e872fef32f2e2ee423e06ddd5a929c66d13447653ce,PodSandboxId:96ee1437a5ec61f829b30de99bea6880533a70195f3a216ab9ca8c91430d9df0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737984898214579035,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9ncnm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab57403598ff461ac338d9860621cc34417fa4df166cc9be49c667666d0f80d,PodSandboxId:e38d5d8508ab99270398e9c050ceeffda478d11670dc4672e573ff1a5eb7e785,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1737984897082217337,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cjsf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc0ff8f0ff9f19054f1d06ce383406166899902d86c04c9c66d020b4f6bd9cc4,PodSandboxId:223ebf43db05e1dd32585880953b4c5132d3881cea28181f78c69c4ae603a797,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c311
3e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1737984885921834485,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 559c2dd6ab9d4ea526efe8e9708e08ee,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:718423e950e47aded11a5a88caa034d0993357baf0abfca8a57fd4f78b84907a,PodSandboxId:7c5c85aa511a9051fffc0b71445a018b589b831fb822755bb69e18716d6c6517,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b5
98fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1737984885900747155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eb07855ee228cc800e7f829cc67fc5,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c47657543862e30cf3cd220ed62287f082f2ca7a6e4a0382e180e6817c9774a,PodSandboxId:b6d90431f5114e0addaf1c65e9ec19b96f9b7e36ce75118e08ef9df71cec37ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3a
d8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737984885809950549,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dba6850448ed3c704de746edb1e3a9cb,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b27ce31f5451880e28035ff21e918548dd1963b152659597fbd744ea190db77,PodSandboxId:bd135f2af193f7d78c0091182d87cbd9c249c0e308b39793cbd7db37a6d4bbf5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,
Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1737984885830691868,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca91de734b13ddb683e149b359434fc7,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15884f8d140f189530498c89f50dd9fda5897d7079f30ed7030fb33297714036,PodSandboxId:7f8ca46086f8259097c039b4b70a0490df1f8db3bc61159fa3f771b940a67d34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:
map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1737984559428302761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-174381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca91de734b13ddb683e149b359434fc7,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26959c1d-ab68-4d5d-8c01-808a66677304 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5b72cf3fa4f81       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 minutes ago       Exited              dashboard-metrics-scraper   8                   e00bef3bcb3e3       dashboard-metrics-scraper-86c6bf9756-kfgkj
	23bcd6316e521       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   19 minutes ago      Running             kubernetes-dashboard        0                   1651acbef779c       kubernetes-dashboard-7779f9b69b-dvg4b
	69839fcf5327e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 minutes ago      Running             storage-provisioner         0                   fbb770fdd62e5       storage-provisioner
	ed597ee701a24       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           19 minutes ago      Running             coredns                     0                   395bc3cb00a48       coredns-668d6bf9bc-hjncm
	19ef3eda416e4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           19 minutes ago      Running             coredns                     0                   96ee1437a5ec6       coredns-668d6bf9bc-9ncnm
	fab57403598ff       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                           19 minutes ago      Running             kube-proxy                  0                   e38d5d8508ab9       kube-proxy-cjsf9
	bc0ff8f0ff9f1       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                           20 minutes ago      Running             kube-scheduler              2                   223ebf43db05e       kube-scheduler-embed-certs-174381
	718423e950e47       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                           20 minutes ago      Running             kube-controller-manager     3                   7c5c85aa511a9       kube-controller-manager-embed-certs-174381
	8b27ce31f5451       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           20 minutes ago      Running             kube-apiserver              3                   bd135f2af193f       kube-apiserver-embed-certs-174381
	2c47657543862       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           20 minutes ago      Running             etcd                        2                   b6d90431f5114       etcd-embed-certs-174381
	15884f8d140f1       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                           25 minutes ago      Exited              kube-apiserver              2                   7f8ca46086f82       kube-apiserver-embed-certs-174381
	
	
	==> coredns [19ef3eda416e4caf07358e872fef32f2e2ee423e06ddd5a929c66d13447653ce] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ed597ee701a2497073e3af9dd358a11e196b1625d097b2dc4bb68a96f42e9eda] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-174381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-174381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b
	                    minikube.k8s.io/name=embed-certs-174381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T13_34_52_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 13:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-174381
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 13:54:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 13:49:58 +0000   Mon, 27 Jan 2025 13:34:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 13:49:58 +0000   Mon, 27 Jan 2025 13:34:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 13:49:58 +0000   Mon, 27 Jan 2025 13:34:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 13:49:58 +0000   Mon, 27 Jan 2025 13:34:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    embed-certs-174381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 93cdee00c290475fab7741cf71d7ecef
	  System UUID:                93cdee00-c290-475f-ab77-41cf71d7ecef
	  Boot ID:                    0ccc34c7-47d7-4557-9941-643943dff663
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-9ncnm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-668d6bf9bc-hjncm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-embed-certs-174381                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-embed-certs-174381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-embed-certs-174381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-cjsf9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-embed-certs-174381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-f79f97bbb-hxlwf                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-kfgkj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-dvg4b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 20m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m   kubelet          Node embed-certs-174381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m   kubelet          Node embed-certs-174381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m   kubelet          Node embed-certs-174381 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m   node-controller  Node embed-certs-174381 event: Registered Node embed-certs-174381 in Controller
	
	
	==> dmesg <==
	[  +0.040720] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.388092] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.075213] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.686736] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.895037] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.062009] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058693] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.184208] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.160595] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.282609] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.297439] systemd-fstab-generator[797]: Ignoring "noauto" option for root device
	[  +0.065733] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.401777] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[Jan27 13:29] kauditd_printk_skb: 87 callbacks suppressed
	[ +35.450748] kauditd_printk_skb: 86 callbacks suppressed
	[Jan27 13:34] kauditd_printk_skb: 6 callbacks suppressed
	[ +23.230960] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +7.093390] systemd-fstab-generator[3178]: Ignoring "noauto" option for root device
	[  +0.114797] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.209232] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.098933] systemd-fstab-generator[3345]: Ignoring "noauto" option for root device
	[Jan27 13:35] kauditd_printk_skb: 110 callbacks suppressed
	[ +11.369274] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [2c47657543862e30cf3cd220ed62287f082f2ca7a6e4a0382e180e6817c9774a] <==
	{"level":"warn","ts":"2025-01-27T13:35:16.269892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"425.684462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:35:16.269914Z","caller":"traceutil/trace.go:171","msg":"trace[250779979] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:511; }","duration":"425.736714ms","start":"2025-01-27T13:35:15.844171Z","end":"2025-01-27T13:35:16.269908Z","steps":["trace[250779979] 'agreement among raft nodes before linearized reading'  (duration: 425.651133ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:35:16.269935Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:35:15.844152Z","time spent":"425.779529ms","remote":"127.0.0.1:37734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2025-01-27T13:35:16.287933Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:35:15.658440Z","time spent":"609.634468ms","remote":"127.0.0.1:37888","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:510 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-01-27T13:35:21.235018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.34703ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11046085960210437682 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.7\" mod_revision:505 > success:<request_put:<key:\"/registry/masterleases/192.168.39.7\" value_size:65 lease:1822713923355661872 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.7\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-27T13:35:21.235133Z","caller":"traceutil/trace.go:171","msg":"trace[1381357674] linearizableReadLoop","detail":"{readStateIndex:544; appliedIndex:543; }","duration":"297.885552ms","start":"2025-01-27T13:35:20.937232Z","end":"2025-01-27T13:35:21.235118Z","steps":["trace[1381357674] 'read index received'  (duration: 45.160142ms)","trace[1381357674] 'applied index is now lower than readState.Index'  (duration: 252.723914ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:35:21.235228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.987676ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:35:21.235247Z","caller":"traceutil/trace.go:171","msg":"trace[897482056] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:527; }","duration":"298.0414ms","start":"2025-01-27T13:35:20.937200Z","end":"2025-01-27T13:35:21.235241Z","steps":["trace[897482056] 'agreement among raft nodes before linearized reading'  (duration: 297.959166ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:35:21.235410Z","caller":"traceutil/trace.go:171","msg":"trace[1510627463] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"382.953923ms","start":"2025-01-27T13:35:20.852447Z","end":"2025-01-27T13:35:21.235401Z","steps":["trace[1510627463] 'process raft request'  (duration: 129.892257ms)","trace[1510627463] 'compare'  (duration: 252.210203ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-27T13:35:21.235502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:35:20.852435Z","time spent":"383.019883ms","remote":"127.0.0.1:37770","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":116,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.7\" mod_revision:505 > success:<request_put:<key:\"/registry/masterleases/192.168.39.7\" value_size:65 lease:1822713923355661872 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.7\" > >"}
	{"level":"warn","ts":"2025-01-27T13:35:21.752581Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.284686ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-27T13:35:21.752687Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"415.489642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T13:35:21.752767Z","caller":"traceutil/trace.go:171","msg":"trace[904500180] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:527; }","duration":"415.597797ms","start":"2025-01-27T13:35:21.337155Z","end":"2025-01-27T13:35:21.752753Z","steps":["trace[904500180] 'range keys from in-memory index tree'  (duration: 415.395081ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T13:35:21.752805Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T13:35:21.337142Z","time spent":"415.653938ms","remote":"127.0.0.1:37914","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-01-27T13:35:21.752701Z","caller":"traceutil/trace.go:171","msg":"trace[1086168314] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:527; }","duration":"186.418174ms","start":"2025-01-27T13:35:21.566268Z","end":"2025-01-27T13:35:21.752686Z","steps":["trace[1086168314] 'range keys from in-memory index tree'  (duration: 186.165019ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:35:22.422625Z","caller":"traceutil/trace.go:171","msg":"trace[1392422139] transaction","detail":"{read_only:false; response_revision:528; number_of_response:1; }","duration":"105.674178ms","start":"2025-01-27T13:35:22.316933Z","end":"2025-01-27T13:35:22.422607Z","steps":["trace[1392422139] 'process raft request'  (duration: 105.404561ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T13:44:47.660998Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":830}
	{"level":"info","ts":"2025-01-27T13:44:47.693493Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":830,"took":"32.130909ms","hash":4103654626,"current-db-size-bytes":3002368,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":3002368,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-01-27T13:44:47.693590Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4103654626,"revision":830,"compact-revision":-1}
	{"level":"info","ts":"2025-01-27T13:49:47.668062Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1082}
	{"level":"info","ts":"2025-01-27T13:49:47.672355Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1082,"took":"3.882875ms","hash":2455677574,"current-db-size-bytes":3002368,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1777664,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:49:47.672404Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2455677574,"revision":1082,"compact-revision":830}
	{"level":"info","ts":"2025-01-27T13:54:47.674616Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1334}
	{"level":"info","ts":"2025-01-27T13:54:47.678963Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1334,"took":"3.934428ms","hash":3004441456,"current-db-size-bytes":3002368,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1826816,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-27T13:54:47.679013Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3004441456,"revision":1334,"compact-revision":1082}
	
	
	==> kernel <==
	 13:54:56 up 26 min,  0 users,  load average: 0.06, 0.14, 0.18
	Linux embed-certs-174381 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [15884f8d140f189530498c89f50dd9fda5897d7079f30ed7030fb33297714036] <==
	I0127 13:34:39.940946       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:34:39.940979       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:34:39.950051       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:39.959988       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.060133       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.101315       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.120702       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.152668       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.211391       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.249834       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.249958       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.260933       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.262273       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.272060       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.277697       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.324626       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.390830       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.400658       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.425834       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.433672       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.464423       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.518994       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.676544       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:40.680184       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0127 13:34:41.012292       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8b27ce31f5451880e28035ff21e918548dd1963b152659597fbd744ea190db77] <==
	I0127 13:50:50.140203       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:50:50.140296       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:52:50.141229       1 handler_proxy.go:99] no RequestInfo found in the context
	W0127 13:52:50.141230       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:52:50.141653       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0127 13:52:50.141889       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:52:50.142931       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:52:50.142965       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0127 13:54:49.138057       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:54:49.138234       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0127 13:54:50.140328       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:54:50.140394       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0127 13:54:50.140438       1 handler_proxy.go:99] no RequestInfo found in the context
	E0127 13:54:50.140515       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0127 13:54:50.141535       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0127 13:54:50.141594       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [718423e950e47aded11a5a88caa034d0993357baf0abfca8a57fd4f78b84907a] <==
	I0127 13:49:58.704713       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-174381"
	E0127 13:50:25.866823       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:25.921671       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:50:55.873305       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:50:55.929228       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:51:22.349691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="331.019µs"
	E0127 13:51:25.880057       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:51:25.935776       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0127 13:51:29.500004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="627.702µs"
	I0127 13:51:32.078576       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="107.597µs"
	I0127 13:51:44.082602       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="82.253µs"
	E0127 13:51:55.887165       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:51:55.944789       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:52:25.893062       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:52:25.952627       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:52:55.899996       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:52:55.960454       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:53:25.906817       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:53:25.967789       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:53:55.913835       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:53:55.974362       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:54:25.920238       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:54:25.982036       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0127 13:54:55.927551       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0127 13:54:55.990374       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [fab57403598ff461ac338d9860621cc34417fa4df166cc9be49c667666d0f80d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 13:34:57.558780       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 13:34:57.581531       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	E0127 13:34:57.581615       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 13:34:57.692150       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 13:34:57.692199       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 13:34:57.692221       1 server_linux.go:170] "Using iptables Proxier"
	I0127 13:34:57.713168       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 13:34:57.713467       1 server.go:497] "Version info" version="v1.32.1"
	I0127 13:34:57.716096       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 13:34:57.721075       1 config.go:199] "Starting service config controller"
	I0127 13:34:57.721131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 13:34:57.721164       1 config.go:105] "Starting endpoint slice config controller"
	I0127 13:34:57.721205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 13:34:57.721973       1 config.go:329] "Starting node config controller"
	I0127 13:34:57.721998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 13:34:57.822947       1 shared_informer.go:320] Caches are synced for node config
	I0127 13:34:57.822959       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 13:34:57.822949       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bc0ff8f0ff9f19054f1d06ce383406166899902d86c04c9c66d020b4f6bd9cc4] <==
	W0127 13:34:49.161787       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 13:34:49.161813       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:49.161965       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 13:34:49.162064       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:49.162130       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 13:34:49.162159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:49.972281       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 13:34:49.972336       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:50.107461       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 13:34:50.107740       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:50.242530       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 13:34:50.242661       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:50.276017       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 13:34:50.276117       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:50.284103       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 13:34:50.284206       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:50.372096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 13:34:50.372199       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:50.375626       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0127 13:34:50.375701       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:50.393605       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 13:34:50.393686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 13:34:50.402411       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 13:34:50.402497       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 13:34:50.842777       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 13:54:10 embed-certs-174381 kubelet[3185]: E0127 13:54:10.065578    3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-hxlwf" podUID="cb779c78-85f9-48e7-88c3-f087f57547e3"
	Jan 27 13:54:12 embed-certs-174381 kubelet[3185]: E0127 13:54:12.360042    3185 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986052359547651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:12 embed-certs-174381 kubelet[3185]: E0127 13:54:12.360330    3185 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986052359547651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:21 embed-certs-174381 kubelet[3185]: I0127 13:54:21.062181    3185 scope.go:117] "RemoveContainer" containerID="5b72cf3fa4f8149ee6eb5119a64362fa3de8e2918e97a0bcc9658d15bf83fe20"
	Jan 27 13:54:21 embed-certs-174381 kubelet[3185]: E0127 13:54:21.062591    3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kfgkj_kubernetes-dashboard(96b88db1-1c62-4298-9ad1-437085020af8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kfgkj" podUID="96b88db1-1c62-4298-9ad1-437085020af8"
	Jan 27 13:54:21 embed-certs-174381 kubelet[3185]: E0127 13:54:21.063764    3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-hxlwf" podUID="cb779c78-85f9-48e7-88c3-f087f57547e3"
	Jan 27 13:54:22 embed-certs-174381 kubelet[3185]: E0127 13:54:22.361913    3185 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986062361230120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:22 embed-certs-174381 kubelet[3185]: E0127 13:54:22.362252    3185 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986062361230120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:32 embed-certs-174381 kubelet[3185]: I0127 13:54:32.062284    3185 scope.go:117] "RemoveContainer" containerID="5b72cf3fa4f8149ee6eb5119a64362fa3de8e2918e97a0bcc9658d15bf83fe20"
	Jan 27 13:54:32 embed-certs-174381 kubelet[3185]: E0127 13:54:32.062467    3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kfgkj_kubernetes-dashboard(96b88db1-1c62-4298-9ad1-437085020af8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kfgkj" podUID="96b88db1-1c62-4298-9ad1-437085020af8"
	Jan 27 13:54:32 embed-certs-174381 kubelet[3185]: E0127 13:54:32.363961    3185 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986072363583919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:32 embed-certs-174381 kubelet[3185]: E0127 13:54:32.364076    3185 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986072363583919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:36 embed-certs-174381 kubelet[3185]: E0127 13:54:36.066835    3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-hxlwf" podUID="cb779c78-85f9-48e7-88c3-f087f57547e3"
	Jan 27 13:54:42 embed-certs-174381 kubelet[3185]: E0127 13:54:42.366052    3185 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986082365747166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:42 embed-certs-174381 kubelet[3185]: E0127 13:54:42.366085    3185 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986082365747166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:44 embed-certs-174381 kubelet[3185]: I0127 13:54:44.063038    3185 scope.go:117] "RemoveContainer" containerID="5b72cf3fa4f8149ee6eb5119a64362fa3de8e2918e97a0bcc9658d15bf83fe20"
	Jan 27 13:54:44 embed-certs-174381 kubelet[3185]: E0127 13:54:44.065059    3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-kfgkj_kubernetes-dashboard(96b88db1-1c62-4298-9ad1-437085020af8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-kfgkj" podUID="96b88db1-1c62-4298-9ad1-437085020af8"
	Jan 27 13:54:50 embed-certs-174381 kubelet[3185]: E0127 13:54:50.065033    3185 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-hxlwf" podUID="cb779c78-85f9-48e7-88c3-f087f57547e3"
	Jan 27 13:54:52 embed-certs-174381 kubelet[3185]: E0127 13:54:52.105739    3185 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 13:54:52 embed-certs-174381 kubelet[3185]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 13:54:52 embed-certs-174381 kubelet[3185]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 13:54:52 embed-certs-174381 kubelet[3185]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 13:54:52 embed-certs-174381 kubelet[3185]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 13:54:52 embed-certs-174381 kubelet[3185]: E0127 13:54:52.367967    3185 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986092367596731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 27 13:54:52 embed-certs-174381 kubelet[3185]: E0127 13:54:52.368012    3185 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986092367596731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185716,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [23bcd6316e521e4e42d783b7c2cd4308138a6fbce6a8ca48d7cc7b73c2cd9861] <==
	2025/01/27 13:42:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:43:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:44:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:45:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:46:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:47:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:48:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:49:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:50:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:51:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:51:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:52:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:52:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:53:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:53:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:54:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/27 13:54:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [69839fcf5327e62d4ae92d2c9bcabd851ee2194c7b2f3f910c78fd577c97f35a] <==
	I0127 13:34:59.420390       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 13:34:59.496451       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 13:34:59.496818       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 13:34:59.529542       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 13:34:59.536354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-174381_2081c6aa-0a24-4838-8818-fba6aa42023d!
	I0127 13:34:59.545380       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24776326-9ce5-4908-b22e-b5ba838d1fe9", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-174381_2081c6aa-0a24-4838-8818-fba6aa42023d became leader
	I0127 13:34:59.638813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-174381_2081c6aa-0a24-4838-8818-fba6aa42023d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-174381 -n embed-certs-174381
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-174381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-hxlwf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-174381 describe pod metrics-server-f79f97bbb-hxlwf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-174381 describe pod metrics-server-f79f97bbb-hxlwf: exit status 1 (60.331607ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-hxlwf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-174381 describe pod metrics-server-f79f97bbb-hxlwf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1599.90s)
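
The post-mortem above narrows the failure to a single non-running pod via `kubectl get po -A --field-selector=status.phase!=Running`. For anyone reproducing that check programmatically rather than through kubectl, a minimal client-go sketch is shown below; the kubeconfig path is the one appearing in these logs and everything else (names, error handling) is illustrative, not part of the test harness:

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Kubeconfig path taken from the harness logs; adjust for your own environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20317-361578/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Mirrors: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}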

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-838260 create -f testdata/busybox.yaml
E0127 13:28:37.522453  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-838260 create -f testdata/busybox.yaml: exit status 1 (49.365409ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-838260" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-838260 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 6 (227.185763ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 13:28:37.767901  426484 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-838260" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-838260" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 6 (282.972036ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 13:28:38.051241  426514 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-838260" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-838260" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.56s)
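
The status checks above return exit status 6 not because the VM is down but because the profile's context is missing from the kubeconfig (status.go:458), which is why the output suggests `minikube update-context`. A minimal sketch of that kubeconfig check using client-go's clientcmd package follows; it assumes the kubeconfig path printed in the log and is illustrative rather than the harness's own code:

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		const kubeconfig = "/home/jenkins/minikube-integration/20317-361578/kubeconfig"
		const profile = "old-k8s-version-838260"
	
		// Load the kubeconfig and look for a context named after the profile;
		// this is the condition behind the message
		// `"old-k8s-version-838260" does not appear in ... kubeconfig`.
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			panic(err)
		}
		kctx, ok := cfg.Contexts[profile]
		if !ok {
			fmt.Printf("context %q missing from %s; `minikube update-context -p %s` rewrites it\n",
				profile, kubeconfig, profile)
			return
		}
		fmt.Printf("context %q present (cluster %q)\n", profile, kctx.Cluster)
	}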

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-838260 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0127 13:28:40.084816  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:40.522281  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:45.207116  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:50.764619  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:28:55.449435  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:01.186489  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:11.246687  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:15.931769  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:19.965802  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:19.972208  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:19.983577  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:20.004975  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:20.046335  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:20.127744  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:20.289482  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:20.611204  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:21.253052  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:22.534828  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:24.910393  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:25.096095  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:30.218466  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:31.287273  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:31.293617  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:31.305008  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:31.326401  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:31.367784  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:31.449228  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:31.610827  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:31.932404  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:32.574606  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:33.856515  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:36.418685  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:39.547622  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:40.460493  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:41.540238  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:51.782415  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:52.208362  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:56.893871  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:29:58.265243  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:00.942077  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:12.264141  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-838260 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.193642822s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-838260 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-838260 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-838260 describe deploy/metrics-server -n kube-system: exit status 1 (45.709292ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-838260" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-838260 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 6 (228.478917ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 13:30:15.519856  427022 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-838260" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-838260" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (514.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-838260 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0127 13:30:23.108722  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:41.904210  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:30:53.225973  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:31:08.030914  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:31:14.129733  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:31:18.816195  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:31:41.051286  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:03.826426  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:08.752029  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:14.404319  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:15.147800  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:31.101213  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:39.248525  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:32:42.107293  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:06.950623  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:30.267031  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:33:34.953108  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-838260 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m32.746443346s)

                                                
                                                
-- stdout --
	* [old-k8s-version-838260] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-838260" primary control-plane node in "old-k8s-version-838260" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-838260" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:30:21.060696  427154 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:30:21.060860  427154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:30:21.060873  427154 out.go:358] Setting ErrFile to fd 2...
	I0127 13:30:21.060880  427154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:30:21.061067  427154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:30:21.061603  427154 out.go:352] Setting JSON to false
	I0127 13:30:21.062581  427154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":22361,"bootTime":1737962260,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:30:21.062692  427154 start.go:139] virtualization: kvm guest
	I0127 13:30:21.064657  427154 out.go:177] * [old-k8s-version-838260] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:30:21.066061  427154 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:30:21.066091  427154 notify.go:220] Checking for updates...
	I0127 13:30:21.068220  427154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:30:21.069605  427154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:30:21.070819  427154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:30:21.072037  427154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:30:21.073231  427154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:30:21.074704  427154 config.go:182] Loaded profile config "old-k8s-version-838260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 13:30:21.075123  427154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:30:21.075205  427154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:30:21.091067  427154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I0127 13:30:21.091447  427154 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:30:21.091997  427154 main.go:141] libmachine: Using API Version  1
	I0127 13:30:21.092024  427154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:30:21.092362  427154 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:30:21.092532  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:21.094181  427154 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0127 13:30:21.095339  427154 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:30:21.095680  427154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:30:21.095716  427154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:30:21.110739  427154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46767
	I0127 13:30:21.111143  427154 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:30:21.111586  427154 main.go:141] libmachine: Using API Version  1
	I0127 13:30:21.111610  427154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:30:21.111903  427154 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:30:21.112077  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:21.147990  427154 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:30:21.149228  427154 start.go:297] selected driver: kvm2
	I0127 13:30:21.149247  427154 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-838260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:30:21.149405  427154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:30:21.150363  427154 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:30:21.150483  427154 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:30:21.167444  427154 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:30:21.167804  427154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 13:30:21.167836  427154 cni.go:84] Creating CNI manager for ""
	I0127 13:30:21.167883  427154 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:30:21.167912  427154 start.go:340] cluster config:
	{Name:old-k8s-version-838260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:30:21.168018  427154 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:30:21.170147  427154 out.go:177] * Starting "old-k8s-version-838260" primary control-plane node in "old-k8s-version-838260" cluster
	I0127 13:30:21.171175  427154 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:30:21.171214  427154 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 13:30:21.171227  427154 cache.go:56] Caching tarball of preloaded images
	I0127 13:30:21.171306  427154 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:30:21.171316  427154 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0127 13:30:21.171407  427154 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/config.json ...
	I0127 13:30:21.171583  427154 start.go:360] acquireMachinesLock for old-k8s-version-838260: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:30:21.171624  427154 start.go:364] duration metric: took 23.986µs to acquireMachinesLock for "old-k8s-version-838260"
	I0127 13:30:21.171639  427154 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:30:21.171647  427154 fix.go:54] fixHost starting: 
	I0127 13:30:21.171892  427154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:30:21.171924  427154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:30:21.185713  427154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0127 13:30:21.186193  427154 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:30:21.186743  427154 main.go:141] libmachine: Using API Version  1
	I0127 13:30:21.186766  427154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:30:21.187130  427154 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:30:21.187334  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:21.187547  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetState
	I0127 13:30:21.189066  427154 fix.go:112] recreateIfNeeded on old-k8s-version-838260: state=Stopped err=<nil>
	I0127 13:30:21.189102  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	W0127 13:30:21.189254  427154 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:30:21.191346  427154 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-838260" ...
	I0127 13:30:21.192480  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .Start
	I0127 13:30:21.192652  427154 main.go:141] libmachine: (old-k8s-version-838260) starting domain...
	I0127 13:30:21.192674  427154 main.go:141] libmachine: (old-k8s-version-838260) ensuring networks are active...
	I0127 13:30:21.193407  427154 main.go:141] libmachine: (old-k8s-version-838260) Ensuring network default is active
	I0127 13:30:21.193726  427154 main.go:141] libmachine: (old-k8s-version-838260) Ensuring network mk-old-k8s-version-838260 is active
	I0127 13:30:21.194119  427154 main.go:141] libmachine: (old-k8s-version-838260) getting domain XML...
	I0127 13:30:21.194931  427154 main.go:141] libmachine: (old-k8s-version-838260) creating domain...
	I0127 13:30:22.473181  427154 main.go:141] libmachine: (old-k8s-version-838260) waiting for IP...
	I0127 13:30:22.474141  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:22.474654  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:22.474739  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:22.474654  427206 retry.go:31] will retry after 221.198218ms: waiting for domain to come up
	I0127 13:30:22.697105  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:22.697797  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:22.697816  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:22.697769  427206 retry.go:31] will retry after 291.436766ms: waiting for domain to come up
	I0127 13:30:22.991315  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:22.991909  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:22.991945  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:22.991856  427206 retry.go:31] will retry after 411.156671ms: waiting for domain to come up
	I0127 13:30:23.404217  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:23.404809  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:23.404836  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:23.404766  427206 retry.go:31] will retry after 534.447837ms: waiting for domain to come up
	I0127 13:30:23.941274  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:23.941945  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:23.941978  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:23.941893  427206 retry.go:31] will retry after 686.062822ms: waiting for domain to come up
	I0127 13:30:24.629276  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:24.629781  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:24.629824  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:24.629740  427206 retry.go:31] will retry after 917.384373ms: waiting for domain to come up
	I0127 13:30:25.548274  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:25.548841  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:25.548864  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:25.548782  427206 retry.go:31] will retry after 1.15119368s: waiting for domain to come up
	I0127 13:30:26.701386  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:26.701965  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:26.702005  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:26.701950  427206 retry.go:31] will retry after 1.1176216s: waiting for domain to come up
	I0127 13:30:27.821253  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:27.821765  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:27.821794  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:27.821732  427206 retry.go:31] will retry after 1.497774401s: waiting for domain to come up
	I0127 13:30:29.321550  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:29.322205  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:29.322238  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:29.322157  427206 retry.go:31] will retry after 1.957258662s: waiting for domain to come up
	I0127 13:30:31.281594  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:31.282084  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:31.282114  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:31.282044  427206 retry.go:31] will retry after 1.775499813s: waiting for domain to come up
	I0127 13:30:33.059703  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:33.060239  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:33.060269  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:33.060203  427206 retry.go:31] will retry after 2.530058761s: waiting for domain to come up
	I0127 13:30:35.591740  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:35.592203  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | unable to find current IP address of domain old-k8s-version-838260 in network mk-old-k8s-version-838260
	I0127 13:30:35.592233  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | I0127 13:30:35.592160  427206 retry.go:31] will retry after 4.088544234s: waiting for domain to come up
	I0127 13:30:39.683591  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.684048  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has current primary IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.684070  427154 main.go:141] libmachine: (old-k8s-version-838260) found domain IP: 192.168.61.159
	I0127 13:30:39.684117  427154 main.go:141] libmachine: (old-k8s-version-838260) reserving static IP address...
	I0127 13:30:39.684547  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "old-k8s-version-838260", mac: "52:54:00:9d:07:25", ip: "192.168.61.159"} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:39.684577  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | skip adding static IP to network mk-old-k8s-version-838260 - found existing host DHCP lease matching {name: "old-k8s-version-838260", mac: "52:54:00:9d:07:25", ip: "192.168.61.159"}
	I0127 13:30:39.684602  427154 main.go:141] libmachine: (old-k8s-version-838260) reserved static IP address 192.168.61.159 for domain old-k8s-version-838260
	I0127 13:30:39.684620  427154 main.go:141] libmachine: (old-k8s-version-838260) waiting for SSH...
	I0127 13:30:39.684632  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | Getting to WaitForSSH function...
	I0127 13:30:39.686778  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.687105  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:39.687140  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.687263  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | Using SSH client type: external
	I0127 13:30:39.687289  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa (-rw-------)
	I0127 13:30:39.687321  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.159 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:30:39.687404  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | About to run SSH command:
	I0127 13:30:39.687424  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | exit 0
	I0127 13:30:39.810845  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | SSH cmd err, output: <nil>: 
	I0127 13:30:39.811227  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetConfigRaw
	I0127 13:30:39.811972  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetIP
	I0127 13:30:39.814506  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.814904  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:39.814925  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.815157  427154 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/config.json ...
	I0127 13:30:39.815395  427154 machine.go:93] provisionDockerMachine start ...
	I0127 13:30:39.815417  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:39.815615  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:39.817642  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.817936  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:39.817981  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.818105  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:39.818271  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:39.818445  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:39.818579  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:39.818738  427154 main.go:141] libmachine: Using SSH client type: native
	I0127 13:30:39.818929  427154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:30:39.818939  427154 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:30:39.927021  427154 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:30:39.927053  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetMachineName
	I0127 13:30:39.927328  427154 buildroot.go:166] provisioning hostname "old-k8s-version-838260"
	I0127 13:30:39.927362  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetMachineName
	I0127 13:30:39.927563  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:39.930259  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.930693  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:39.930728  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:39.930875  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:39.931052  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:39.931232  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:39.931371  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:39.931513  427154 main.go:141] libmachine: Using SSH client type: native
	I0127 13:30:39.931691  427154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:30:39.931713  427154 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-838260 && echo "old-k8s-version-838260" | sudo tee /etc/hostname
	I0127 13:30:40.051255  427154 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-838260
	
	I0127 13:30:40.051293  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:40.054201  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.054552  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.054612  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.054737  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:40.054934  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.055133  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.055297  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:40.055461  427154 main.go:141] libmachine: Using SSH client type: native
	I0127 13:30:40.055627  427154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:30:40.055643  427154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-838260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-838260/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-838260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:30:40.165315  427154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:30:40.165369  427154 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:30:40.165400  427154 buildroot.go:174] setting up certificates
	I0127 13:30:40.165413  427154 provision.go:84] configureAuth start
	I0127 13:30:40.165427  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetMachineName
	I0127 13:30:40.165749  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetIP
	I0127 13:30:40.168586  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.168967  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.169000  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.169179  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:40.171497  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.171889  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.171933  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.172057  427154 provision.go:143] copyHostCerts
	I0127 13:30:40.172111  427154 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:30:40.172126  427154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:30:40.172191  427154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:30:40.172291  427154 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:30:40.172299  427154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:30:40.172323  427154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:30:40.172388  427154 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:30:40.172400  427154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:30:40.172422  427154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:30:40.172481  427154 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-838260 san=[127.0.0.1 192.168.61.159 localhost minikube old-k8s-version-838260]
	I0127 13:30:40.264740  427154 provision.go:177] copyRemoteCerts
	I0127 13:30:40.264796  427154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:30:40.264820  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:40.267981  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.268357  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.268399  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.268568  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:40.268773  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.268952  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:40.269103  427154 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa Username:docker}
	I0127 13:30:40.353793  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:30:40.381966  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0127 13:30:40.406000  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:30:40.432798  427154 provision.go:87] duration metric: took 267.371477ms to configureAuth
	I0127 13:30:40.432827  427154 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:30:40.433050  427154 config.go:182] Loaded profile config "old-k8s-version-838260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 13:30:40.433160  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:40.435880  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.436263  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.436293  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.436508  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:40.436709  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.436874  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.436994  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:40.437142  427154 main.go:141] libmachine: Using SSH client type: native
	I0127 13:30:40.437321  427154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:30:40.437335  427154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:30:40.684797  427154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:30:40.684829  427154 machine.go:96] duration metric: took 869.417103ms to provisionDockerMachine
	I0127 13:30:40.684845  427154 start.go:293] postStartSetup for "old-k8s-version-838260" (driver="kvm2")
	I0127 13:30:40.684860  427154 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:30:40.684897  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:40.685227  427154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:30:40.685259  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:40.687796  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.688199  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.688235  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.688368  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:40.688570  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.688728  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:40.688876  427154 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa Username:docker}
	I0127 13:30:40.774249  427154 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:30:40.779075  427154 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:30:40.779112  427154 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:30:40.779185  427154 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:30:40.779283  427154 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:30:40.779396  427154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:30:40.789438  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:30:40.814318  427154 start.go:296] duration metric: took 129.456001ms for postStartSetup
	I0127 13:30:40.814370  427154 fix.go:56] duration metric: took 19.642722303s for fixHost
	I0127 13:30:40.814396  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:40.817176  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.817539  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.817578  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.817793  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:40.817976  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.818119  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.818225  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:40.818401  427154 main.go:141] libmachine: Using SSH client type: native
	I0127 13:30:40.818582  427154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.159 22 <nil> <nil>}
	I0127 13:30:40.818594  427154 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:30:40.923314  427154 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984640.895280558
	
	I0127 13:30:40.923337  427154 fix.go:216] guest clock: 1737984640.895280558
	I0127 13:30:40.923344  427154 fix.go:229] Guest: 2025-01-27 13:30:40.895280558 +0000 UTC Remote: 2025-01-27 13:30:40.814375895 +0000 UTC m=+19.793713866 (delta=80.904663ms)
	I0127 13:30:40.923364  427154 fix.go:200] guest clock delta is within tolerance: 80.904663ms
	I0127 13:30:40.923375  427154 start.go:83] releasing machines lock for "old-k8s-version-838260", held for 19.751741266s
	I0127 13:30:40.923394  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:40.923640  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetIP
	I0127 13:30:40.926610  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.927030  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.927052  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.927245  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:40.927745  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:40.927930  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .DriverName
	I0127 13:30:40.928029  427154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:30:40.928084  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:40.928143  427154 ssh_runner.go:195] Run: cat /version.json
	I0127 13:30:40.928166  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHHostname
	I0127 13:30:40.930779  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.930875  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.931173  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.931222  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:40.931267  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.931301  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:40.931395  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:40.931573  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHPort
	I0127 13:30:40.931603  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.931752  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHKeyPath
	I0127 13:30:40.931781  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:40.931849  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetSSHUsername
	I0127 13:30:40.931951  427154 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa Username:docker}
	I0127 13:30:40.932021  427154 sshutil.go:53] new ssh client: &{IP:192.168.61.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/old-k8s-version-838260/id_rsa Username:docker}
	I0127 13:30:41.030864  427154 ssh_runner.go:195] Run: systemctl --version
	I0127 13:30:41.037447  427154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:30:41.183980  427154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:30:41.190868  427154 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:30:41.190923  427154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:30:41.208786  427154 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:30:41.208809  427154 start.go:495] detecting cgroup driver to use...
	I0127 13:30:41.208889  427154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:30:41.225829  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:30:41.241913  427154 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:30:41.241968  427154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:30:41.256156  427154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:30:41.270696  427154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:30:41.389801  427154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:30:41.528687  427154 docker.go:233] disabling docker service ...
	I0127 13:30:41.528765  427154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:30:41.547310  427154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:30:41.560866  427154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:30:41.709296  427154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:30:41.822273  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:30:41.837392  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:30:41.858419  427154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0127 13:30:41.858479  427154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:30:41.869781  427154 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:30:41.869837  427154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:30:41.881360  427154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:30:41.892026  427154 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:30:41.904014  427154 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:30:41.915589  427154 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:30:41.925621  427154 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:30:41.925662  427154 ssh_runner.go:195] Run: sudo modprobe br_netfilter
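	The sysctl failure just above is expected at this point: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter kernel module has been loaded, which is exactly what the modprobe on the next line does. As a minimal illustration of that check-then-load step (path and command taken from the log; this is a sketch, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(key); os.IsNotExist(err) {
			// The sysctl path only appears once br_netfilter is loaded.
			if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
				return
			}
		}
		fmt.Println("bridge netfilter sysctl available at", key)
	}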
	I0127 13:30:41.940102  427154 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:30:41.951168  427154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:30:42.077152  427154 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:30:42.177027  427154 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:30:42.177105  427154 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:30:42.181992  427154 start.go:563] Will wait 60s for crictl version
	I0127 13:30:42.182055  427154 ssh_runner.go:195] Run: which crictl
	I0127 13:30:42.186084  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:30:42.229566  427154 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:30:42.229658  427154 ssh_runner.go:195] Run: crio --version
	I0127 13:30:42.265753  427154 ssh_runner.go:195] Run: crio --version
	I0127 13:30:42.301949  427154 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0127 13:30:42.303475  427154 main.go:141] libmachine: (old-k8s-version-838260) Calling .GetIP
	I0127 13:30:42.306565  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:42.306897  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:07:25", ip: ""} in network mk-old-k8s-version-838260: {Iface:virbr1 ExpiryTime:2025-01-27 14:30:33 +0000 UTC Type:0 Mac:52:54:00:9d:07:25 Iaid: IPaddr:192.168.61.159 Prefix:24 Hostname:old-k8s-version-838260 Clientid:01:52:54:00:9d:07:25}
	I0127 13:30:42.306928  427154 main.go:141] libmachine: (old-k8s-version-838260) DBG | domain old-k8s-version-838260 has defined IP address 192.168.61.159 and MAC address 52:54:00:9d:07:25 in network mk-old-k8s-version-838260
	I0127 13:30:42.307114  427154 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0127 13:30:42.311730  427154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
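	The bash one-liner above makes the /etc/hosts update idempotent: any existing line for host.minikube.internal is filtered out before the fresh "192.168.61.1<TAB>host.minikube.internal" mapping is appended. A rough Go equivalent of that pattern, assuming the IP and hostname from the log and with error handling kept minimal (not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping for this hostname.
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}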
	I0127 13:30:42.325703  427154 kubeadm.go:883] updating cluster {Name:old-k8s-version-838260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:30:42.325870  427154 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 13:30:42.325943  427154 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:30:42.377218  427154 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 13:30:42.377277  427154 ssh_runner.go:195] Run: which lz4
	I0127 13:30:42.381454  427154 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:30:42.385728  427154 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:30:42.385764  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0127 13:30:44.050823  427154 crio.go:462] duration metric: took 1.669412477s to copy over tarball
	I0127 13:30:44.050903  427154 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:30:46.970863  427154 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.919894144s)
	I0127 13:30:46.970901  427154 crio.go:469] duration metric: took 2.92004448s to extract the tarball
	I0127 13:30:46.970913  427154 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 13:30:47.014455  427154 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:30:47.053439  427154 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0127 13:30:47.053468  427154 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0127 13:30:47.053540  427154 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:30:47.053542  427154 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:30:47.053613  427154 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:30:47.053556  427154 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:30:47.053542  427154 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:30:47.053725  427154 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:30:47.053774  427154 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0127 13:30:47.053778  427154 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0127 13:30:47.057374  427154 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0127 13:30:47.057865  427154 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:30:47.057924  427154 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:30:47.057870  427154 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:30:47.057870  427154 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:30:47.058200  427154 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:30:47.057872  427154 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:30:47.057965  427154 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0127 13:30:47.230362  427154 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0127 13:30:47.259704  427154 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:30:47.284829  427154 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0127 13:30:47.284933  427154 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0127 13:30:47.284996  427154 ssh_runner.go:195] Run: which crictl
	I0127 13:30:47.313638  427154 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0127 13:30:47.313697  427154 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:30:47.313732  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:30:47.313740  427154 ssh_runner.go:195] Run: which crictl
	I0127 13:30:47.351479  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:30:47.351480  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:30:47.373255  427154 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:30:47.376423  427154 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0127 13:30:47.379809  427154 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:30:47.383527  427154 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0127 13:30:47.383845  427154 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:30:47.436320  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:30:47.459797  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0127 13:30:47.533480  427154 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0127 13:30:47.533620  427154 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:30:47.533734  427154 ssh_runner.go:195] Run: which crictl
	I0127 13:30:47.549766  427154 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0127 13:30:47.549820  427154 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0127 13:30:47.549866  427154 ssh_runner.go:195] Run: which crictl
	I0127 13:30:47.606600  427154 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0127 13:30:47.606649  427154 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0127 13:30:47.606680  427154 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0127 13:30:47.606719  427154 ssh_runner.go:195] Run: which crictl
	I0127 13:30:47.606723  427154 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:30:47.606767  427154 ssh_runner.go:195] Run: which crictl
	I0127 13:30:47.606686  427154 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0127 13:30:47.606849  427154 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:30:47.606872  427154 ssh_runner.go:195] Run: which crictl
	I0127 13:30:47.611318  427154 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0127 13:30:47.611382  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:30:47.611415  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:30:47.611458  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0127 13:30:47.615076  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:30:47.615834  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:30:47.618283  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:30:47.736592  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:30:47.736671  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:30:47.736696  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:30:47.736790  427154 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0127 13:30:47.761234  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:30:47.767679  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:30:47.871214  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0127 13:30:47.871269  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0127 13:30:47.871300  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0127 13:30:47.871357  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0127 13:30:47.871521  427154 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0127 13:30:47.988765  427154 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0127 13:30:47.988812  427154 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0127 13:30:47.988861  427154 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0127 13:30:47.988861  427154 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0127 13:30:47.988896  427154 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0127 13:30:49.382986  427154 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:30:49.532893  427154 cache_images.go:92] duration metric: took 2.479406004s to LoadCachedImages
	W0127 13:30:49.533026  427154 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20317-361578/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0127 13:30:49.533049  427154 kubeadm.go:934] updating node { 192.168.61.159 8443 v1.20.0 crio true true} ...
	I0127 13:30:49.533206  427154 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-838260 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.159
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:30:49.533292  427154 ssh_runner.go:195] Run: crio config
	I0127 13:30:49.582006  427154 cni.go:84] Creating CNI manager for ""
	I0127 13:30:49.582036  427154 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:30:49.582048  427154 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 13:30:49.582077  427154 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.159 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-838260 NodeName:old-k8s-version-838260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.159"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.159 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0127 13:30:49.582235  427154 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.159
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-838260"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.159
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.159"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:30:49.582304  427154 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0127 13:30:49.593471  427154 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:30:49.593532  427154 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:30:49.603717  427154 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0127 13:30:49.620386  427154 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:30:49.636337  427154 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0127 13:30:49.653331  427154 ssh_runner.go:195] Run: grep 192.168.61.159	control-plane.minikube.internal$ /etc/hosts
	I0127 13:30:49.657154  427154 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.159	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:30:49.671734  427154 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:30:49.786395  427154 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:30:49.805509  427154 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260 for IP: 192.168.61.159
	I0127 13:30:49.805533  427154 certs.go:194] generating shared ca certs ...
	I0127 13:30:49.805552  427154 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:30:49.805723  427154 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:30:49.805780  427154 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:30:49.805793  427154 certs.go:256] generating profile certs ...
	I0127 13:30:49.805912  427154 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/client.key
	I0127 13:30:49.805986  427154 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.key.552336b8
	I0127 13:30:49.806041  427154 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.key
	I0127 13:30:49.806203  427154 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:30:49.806247  427154 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:30:49.806257  427154 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:30:49.806297  427154 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:30:49.806334  427154 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:30:49.806367  427154 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:30:49.806422  427154 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:30:49.807187  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:30:49.858781  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:30:49.900302  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:30:49.936983  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:30:49.985080  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0127 13:30:50.015702  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 13:30:50.059726  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:30:50.092953  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/old-k8s-version-838260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:30:50.117186  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:30:50.144497  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:30:50.172900  427154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:30:50.196888  427154 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:30:50.213140  427154 ssh_runner.go:195] Run: openssl version
	I0127 13:30:50.219413  427154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:30:50.230200  427154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:30:50.234918  427154 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:30:50.234975  427154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:30:50.241326  427154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:30:50.253069  427154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:30:50.264376  427154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:30:50.269362  427154 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:30:50.269416  427154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:30:50.275408  427154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:30:50.285750  427154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:30:50.296322  427154 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:30:50.301436  427154 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:30:50.301530  427154 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:30:50.307254  427154 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:30:50.318596  427154 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:30:50.323051  427154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:30:50.328994  427154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:30:50.335023  427154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:30:50.340807  427154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:30:50.347305  427154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:30:50.354772  427154 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
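	Each "openssl x509 ... -checkend 86400" call above succeeds only if the certificate will still be valid 86400 seconds (24 hours) from now, which is presumably how the existing control-plane certificates are screened before the restart path continues. A small Go sketch of the same validity check (the file path is one of those listed above; this is illustration, not minikube's code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path is still valid
	// d from now, mirroring openssl's -checkend behaviour.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}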
	I0127 13:30:50.361748  427154 kubeadm.go:392] StartCluster: {Name:old-k8s-version-838260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-838260 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.159 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersi
on:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:30:50.361846  427154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:30:50.361887  427154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:30:50.405756  427154 cri.go:89] found id: ""
	I0127 13:30:50.405835  427154 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:30:50.415883  427154 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:30:50.415903  427154 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:30:50.415947  427154 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:30:50.425341  427154 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:30:50.426109  427154 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-838260" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:30:50.426514  427154 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-838260" cluster setting kubeconfig missing "old-k8s-version-838260" context setting]
	I0127 13:30:50.427187  427154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:30:50.428785  427154 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:30:50.438277  427154 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.159
	I0127 13:30:50.438310  427154 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:30:50.438325  427154 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:30:50.438398  427154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:30:50.476509  427154 cri.go:89] found id: ""
	I0127 13:30:50.476578  427154 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:30:50.495013  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:30:50.506904  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:30:50.506928  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:30:50.506975  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:30:50.516501  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:30:50.516549  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:30:50.525755  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:30:50.534481  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:30:50.534530  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:30:50.544372  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:30:50.553459  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:30:50.553505  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:30:50.562811  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:30:50.571994  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:30:50.572055  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:30:50.581168  427154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:30:50.590352  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:30:50.718795  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:30:51.557022  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:30:51.779540  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:30:51.894749  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:30:51.999442  427154 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:30:51.999540  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:52.500557  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:52.999612  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:53.499806  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:54.000127  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:54.500101  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:55.000438  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:55.500092  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:55.999634  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:56.500292  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:56.999611  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:57.499909  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:58.000197  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:58.500225  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:58.999834  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:30:59.499676  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:00.000308  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:00.500153  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:00.999897  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:01.500439  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:01.999723  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:02.499707  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:02.999893  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:03.499818  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:04.000540  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:04.500201  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:05.000019  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:05.500414  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:06.000116  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:06.500342  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:07.000093  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:07.499578  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:08.000534  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:08.500416  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:09.000634  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:09.500489  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:10.000399  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:10.499771  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:11.000466  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:11.499693  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:12.000273  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:12.500141  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:12.999698  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:13.500044  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:14.000381  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:14.499979  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:15.000512  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:15.499654  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:15.999950  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:16.500230  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:17.000466  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:17.499665  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:17.999801  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:18.500093  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:19.000393  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:19.500491  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:20.000467  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:20.500625  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:21.000413  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:21.500481  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:22.000028  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:22.500593  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:22.999688  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:23.499837  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:24.000418  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:24.499704  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:25.000675  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:25.499714  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:26.000112  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:26.500072  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:27.000104  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:27.499730  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:28.000438  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:28.499591  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:29.000297  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:29.500437  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:29.999755  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:30.499826  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:30.999788  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:31.500018  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:32.000603  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:32.499943  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:33.000564  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:33.500609  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:34.000651  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:34.500550  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:34.999706  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:35.500597  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:35.999881  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:36.500564  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:37.000221  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:37.499750  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:38.000373  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:38.499786  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:39.000635  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:39.500246  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:40.000037  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:40.499895  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:40.999831  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:41.500504  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:42.000419  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:42.500465  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:43.000575  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:43.500418  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:44.000488  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:44.500402  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:45.000381  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:45.499656  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:46.000030  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:46.500283  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:46.999644  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:47.500229  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:47.999793  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:48.500211  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:48.999675  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:49.500245  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:49.999680  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:50.500522  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:51.000529  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:51.500493  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
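	The long run of identical pgrep commands above is a poll loop: the process list is checked roughly every 500ms until kube-apiserver appears, and after about a minute with no match the run falls through to gathering container logs below. A condensed Go sketch of that wait pattern (command and pattern taken from the log; the timeout value is an assumption, and this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until a process matching pattern exists
	// or the timeout elapses.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when at least one process matches.
			if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
			fmt.Println(err)
		}
	}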
	I0127 13:31:52.000570  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:31:52.000652  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:31:52.045249  427154 cri.go:89] found id: ""
	I0127 13:31:52.045277  427154 logs.go:282] 0 containers: []
	W0127 13:31:52.045286  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:31:52.045292  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:31:52.045349  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:31:52.080758  427154 cri.go:89] found id: ""
	I0127 13:31:52.080792  427154 logs.go:282] 0 containers: []
	W0127 13:31:52.080800  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:31:52.080807  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:31:52.080871  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:31:52.114529  427154 cri.go:89] found id: ""
	I0127 13:31:52.114581  427154 logs.go:282] 0 containers: []
	W0127 13:31:52.114595  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:31:52.114603  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:31:52.114667  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:31:52.150048  427154 cri.go:89] found id: ""
	I0127 13:31:52.150078  427154 logs.go:282] 0 containers: []
	W0127 13:31:52.150087  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:31:52.150093  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:31:52.150146  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:31:52.190845  427154 cri.go:89] found id: ""
	I0127 13:31:52.190874  427154 logs.go:282] 0 containers: []
	W0127 13:31:52.190884  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:31:52.190890  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:31:52.190942  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:31:52.227243  427154 cri.go:89] found id: ""
	I0127 13:31:52.227278  427154 logs.go:282] 0 containers: []
	W0127 13:31:52.227290  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:31:52.227298  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:31:52.227365  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:31:52.264817  427154 cri.go:89] found id: ""
	I0127 13:31:52.264848  427154 logs.go:282] 0 containers: []
	W0127 13:31:52.264860  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:31:52.264867  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:31:52.264945  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:31:52.299775  427154 cri.go:89] found id: ""
	I0127 13:31:52.299805  427154 logs.go:282] 0 containers: []
	W0127 13:31:52.299814  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:31:52.299824  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:31:52.299836  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:31:52.340719  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:31:52.340752  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:31:52.392908  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:31:52.392940  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:31:52.406449  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:31:52.406479  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:31:52.536138  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:31:52.536162  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:31:52.536180  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:31:55.111740  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:55.132716  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:31:55.132791  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:31:55.183164  427154 cri.go:89] found id: ""
	I0127 13:31:55.183219  427154 logs.go:282] 0 containers: []
	W0127 13:31:55.183231  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:31:55.183240  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:31:55.183299  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:31:55.232635  427154 cri.go:89] found id: ""
	I0127 13:31:55.232666  427154 logs.go:282] 0 containers: []
	W0127 13:31:55.232674  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:31:55.232680  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:31:55.232736  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:31:55.267497  427154 cri.go:89] found id: ""
	I0127 13:31:55.267525  427154 logs.go:282] 0 containers: []
	W0127 13:31:55.267534  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:31:55.267540  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:31:55.267590  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:31:55.305253  427154 cri.go:89] found id: ""
	I0127 13:31:55.305285  427154 logs.go:282] 0 containers: []
	W0127 13:31:55.305298  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:31:55.305305  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:31:55.305373  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:31:55.337117  427154 cri.go:89] found id: ""
	I0127 13:31:55.337158  427154 logs.go:282] 0 containers: []
	W0127 13:31:55.337170  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:31:55.337177  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:31:55.337268  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:31:55.374716  427154 cri.go:89] found id: ""
	I0127 13:31:55.374749  427154 logs.go:282] 0 containers: []
	W0127 13:31:55.374762  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:31:55.374771  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:31:55.374836  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:31:55.409574  427154 cri.go:89] found id: ""
	I0127 13:31:55.409607  427154 logs.go:282] 0 containers: []
	W0127 13:31:55.409619  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:31:55.409628  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:31:55.409688  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:31:55.451297  427154 cri.go:89] found id: ""
	I0127 13:31:55.451353  427154 logs.go:282] 0 containers: []
	W0127 13:31:55.451365  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:31:55.451379  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:31:55.451399  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:31:55.534215  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:31:55.534255  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:31:55.580376  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:31:55.580416  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:31:55.633606  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:31:55.633643  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:31:55.647880  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:31:55.647915  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:31:55.724418  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:31:58.226281  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:31:58.239177  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:31:58.239246  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:31:58.275531  427154 cri.go:89] found id: ""
	I0127 13:31:58.275571  427154 logs.go:282] 0 containers: []
	W0127 13:31:58.275585  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:31:58.275594  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:31:58.275658  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:31:58.309383  427154 cri.go:89] found id: ""
	I0127 13:31:58.309415  427154 logs.go:282] 0 containers: []
	W0127 13:31:58.309428  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:31:58.309456  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:31:58.309531  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:31:58.343119  427154 cri.go:89] found id: ""
	I0127 13:31:58.343152  427154 logs.go:282] 0 containers: []
	W0127 13:31:58.343174  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:31:58.343182  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:31:58.343248  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:31:58.382244  427154 cri.go:89] found id: ""
	I0127 13:31:58.382275  427154 logs.go:282] 0 containers: []
	W0127 13:31:58.382303  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:31:58.382313  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:31:58.382394  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:31:58.420668  427154 cri.go:89] found id: ""
	I0127 13:31:58.420702  427154 logs.go:282] 0 containers: []
	W0127 13:31:58.420712  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:31:58.420718  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:31:58.420778  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:31:58.456099  427154 cri.go:89] found id: ""
	I0127 13:31:58.456128  427154 logs.go:282] 0 containers: []
	W0127 13:31:58.456137  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:31:58.456144  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:31:58.456201  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:31:58.488542  427154 cri.go:89] found id: ""
	I0127 13:31:58.488572  427154 logs.go:282] 0 containers: []
	W0127 13:31:58.488581  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:31:58.488588  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:31:58.488639  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:31:58.522352  427154 cri.go:89] found id: ""
	I0127 13:31:58.522383  427154 logs.go:282] 0 containers: []
	W0127 13:31:58.522391  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:31:58.522402  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:31:58.522414  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:31:58.575833  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:31:58.575867  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:31:58.589617  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:31:58.589640  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:31:58.659816  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:31:58.659843  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:31:58.659860  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:31:58.746759  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:31:58.746804  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:01.291675  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:01.305557  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:01.305638  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:01.342853  427154 cri.go:89] found id: ""
	I0127 13:32:01.342879  427154 logs.go:282] 0 containers: []
	W0127 13:32:01.342887  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:01.342894  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:01.342956  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:01.382434  427154 cri.go:89] found id: ""
	I0127 13:32:01.382466  427154 logs.go:282] 0 containers: []
	W0127 13:32:01.382476  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:01.382483  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:01.382572  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:01.416636  427154 cri.go:89] found id: ""
	I0127 13:32:01.416668  427154 logs.go:282] 0 containers: []
	W0127 13:32:01.416678  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:01.416684  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:01.416755  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:01.451439  427154 cri.go:89] found id: ""
	I0127 13:32:01.451469  427154 logs.go:282] 0 containers: []
	W0127 13:32:01.451478  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:01.451483  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:01.451542  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:01.484300  427154 cri.go:89] found id: ""
	I0127 13:32:01.484328  427154 logs.go:282] 0 containers: []
	W0127 13:32:01.484338  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:01.484346  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:01.484410  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:01.518859  427154 cri.go:89] found id: ""
	I0127 13:32:01.518895  427154 logs.go:282] 0 containers: []
	W0127 13:32:01.518908  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:01.518916  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:01.518978  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:01.558052  427154 cri.go:89] found id: ""
	I0127 13:32:01.558078  427154 logs.go:282] 0 containers: []
	W0127 13:32:01.558086  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:01.558091  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:01.558150  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:01.592568  427154 cri.go:89] found id: ""
	I0127 13:32:01.592598  427154 logs.go:282] 0 containers: []
	W0127 13:32:01.592609  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:01.592620  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:01.592638  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:01.645839  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:01.645875  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:01.659098  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:01.659127  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:01.731574  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:01.731607  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:01.731622  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:01.808574  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:01.808614  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:04.346667  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:04.361317  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:04.361378  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:04.396317  427154 cri.go:89] found id: ""
	I0127 13:32:04.396348  427154 logs.go:282] 0 containers: []
	W0127 13:32:04.396358  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:04.396364  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:04.396416  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:04.429455  427154 cri.go:89] found id: ""
	I0127 13:32:04.429489  427154 logs.go:282] 0 containers: []
	W0127 13:32:04.429501  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:04.429509  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:04.429560  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:04.464266  427154 cri.go:89] found id: ""
	I0127 13:32:04.464294  427154 logs.go:282] 0 containers: []
	W0127 13:32:04.464304  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:04.464309  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:04.464361  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:04.498504  427154 cri.go:89] found id: ""
	I0127 13:32:04.498557  427154 logs.go:282] 0 containers: []
	W0127 13:32:04.498570  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:04.498578  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:04.498632  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:04.536826  427154 cri.go:89] found id: ""
	I0127 13:32:04.536859  427154 logs.go:282] 0 containers: []
	W0127 13:32:04.536869  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:04.536875  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:04.536927  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:04.574352  427154 cri.go:89] found id: ""
	I0127 13:32:04.574381  427154 logs.go:282] 0 containers: []
	W0127 13:32:04.574396  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:04.574402  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:04.574451  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:04.609208  427154 cri.go:89] found id: ""
	I0127 13:32:04.609246  427154 logs.go:282] 0 containers: []
	W0127 13:32:04.609254  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:04.609260  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:04.609313  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:04.644638  427154 cri.go:89] found id: ""
	I0127 13:32:04.644667  427154 logs.go:282] 0 containers: []
	W0127 13:32:04.644675  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:04.644686  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:04.644698  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:04.659065  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:04.659106  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:04.737773  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:04.737799  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:04.737811  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:04.812986  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:04.813023  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:04.854128  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:04.854158  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:07.408198  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:07.425378  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:07.425466  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:07.463460  427154 cri.go:89] found id: ""
	I0127 13:32:07.463492  427154 logs.go:282] 0 containers: []
	W0127 13:32:07.463503  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:07.463511  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:07.463579  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:07.498158  427154 cri.go:89] found id: ""
	I0127 13:32:07.498190  427154 logs.go:282] 0 containers: []
	W0127 13:32:07.498199  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:07.498206  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:07.498265  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:07.533353  427154 cri.go:89] found id: ""
	I0127 13:32:07.533392  427154 logs.go:282] 0 containers: []
	W0127 13:32:07.533404  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:07.533424  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:07.533489  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:07.569153  427154 cri.go:89] found id: ""
	I0127 13:32:07.569183  427154 logs.go:282] 0 containers: []
	W0127 13:32:07.569191  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:07.569197  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:07.569249  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:07.604759  427154 cri.go:89] found id: ""
	I0127 13:32:07.604794  427154 logs.go:282] 0 containers: []
	W0127 13:32:07.604806  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:07.604814  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:07.604886  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:07.639281  427154 cri.go:89] found id: ""
	I0127 13:32:07.639311  427154 logs.go:282] 0 containers: []
	W0127 13:32:07.639320  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:07.639326  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:07.639407  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:07.676859  427154 cri.go:89] found id: ""
	I0127 13:32:07.676885  427154 logs.go:282] 0 containers: []
	W0127 13:32:07.676892  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:07.676898  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:07.676947  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:07.711298  427154 cri.go:89] found id: ""
	I0127 13:32:07.711323  427154 logs.go:282] 0 containers: []
	W0127 13:32:07.711331  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:07.711346  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:07.711359  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:07.766460  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:07.766495  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:07.779992  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:07.780020  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:07.846054  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:07.846084  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:07.846101  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:07.929779  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:07.929816  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:10.471075  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:10.486154  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:10.486220  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:10.520556  427154 cri.go:89] found id: ""
	I0127 13:32:10.520583  427154 logs.go:282] 0 containers: []
	W0127 13:32:10.520592  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:10.520598  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:10.520657  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:10.557996  427154 cri.go:89] found id: ""
	I0127 13:32:10.558024  427154 logs.go:282] 0 containers: []
	W0127 13:32:10.558034  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:10.558041  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:10.558115  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:10.594372  427154 cri.go:89] found id: ""
	I0127 13:32:10.594411  427154 logs.go:282] 0 containers: []
	W0127 13:32:10.594422  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:10.594430  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:10.594498  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:10.628992  427154 cri.go:89] found id: ""
	I0127 13:32:10.629029  427154 logs.go:282] 0 containers: []
	W0127 13:32:10.629040  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:10.629048  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:10.629131  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:10.665282  427154 cri.go:89] found id: ""
	I0127 13:32:10.665312  427154 logs.go:282] 0 containers: []
	W0127 13:32:10.665320  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:10.665326  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:10.665378  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:10.700321  427154 cri.go:89] found id: ""
	I0127 13:32:10.700351  427154 logs.go:282] 0 containers: []
	W0127 13:32:10.700359  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:10.700364  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:10.700414  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:10.741563  427154 cri.go:89] found id: ""
	I0127 13:32:10.741596  427154 logs.go:282] 0 containers: []
	W0127 13:32:10.741607  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:10.741614  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:10.741676  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:10.777253  427154 cri.go:89] found id: ""
	I0127 13:32:10.777281  427154 logs.go:282] 0 containers: []
	W0127 13:32:10.777290  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:10.777299  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:10.777312  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:10.830507  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:10.830566  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:10.843862  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:10.843895  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:10.920393  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:10.920431  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:10.920448  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:10.997703  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:10.997745  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:13.540296  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:13.554506  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:13.554600  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:13.589694  427154 cri.go:89] found id: ""
	I0127 13:32:13.589727  427154 logs.go:282] 0 containers: []
	W0127 13:32:13.589738  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:13.589748  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:13.589815  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:13.626101  427154 cri.go:89] found id: ""
	I0127 13:32:13.626132  427154 logs.go:282] 0 containers: []
	W0127 13:32:13.626141  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:13.626148  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:13.626201  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:13.660846  427154 cri.go:89] found id: ""
	I0127 13:32:13.660880  427154 logs.go:282] 0 containers: []
	W0127 13:32:13.660891  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:13.660899  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:13.660959  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:13.705630  427154 cri.go:89] found id: ""
	I0127 13:32:13.705659  427154 logs.go:282] 0 containers: []
	W0127 13:32:13.705668  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:13.705674  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:13.705740  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:13.744375  427154 cri.go:89] found id: ""
	I0127 13:32:13.744401  427154 logs.go:282] 0 containers: []
	W0127 13:32:13.744410  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:13.744416  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:13.744466  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:13.783115  427154 cri.go:89] found id: ""
	I0127 13:32:13.783144  427154 logs.go:282] 0 containers: []
	W0127 13:32:13.783156  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:13.783163  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:13.783228  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:13.820624  427154 cri.go:89] found id: ""
	I0127 13:32:13.820656  427154 logs.go:282] 0 containers: []
	W0127 13:32:13.820673  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:13.820681  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:13.820750  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:13.856562  427154 cri.go:89] found id: ""
	I0127 13:32:13.856593  427154 logs.go:282] 0 containers: []
	W0127 13:32:13.856601  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:13.856612  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:13.856627  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:13.907557  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:13.907590  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:13.922623  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:13.922663  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:14.003276  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:14.003304  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:14.003321  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:14.081613  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:14.081651  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:16.620435  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:16.633598  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:16.633659  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:16.667938  427154 cri.go:89] found id: ""
	I0127 13:32:16.667961  427154 logs.go:282] 0 containers: []
	W0127 13:32:16.667969  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:16.667974  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:16.668032  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:16.702274  427154 cri.go:89] found id: ""
	I0127 13:32:16.702303  427154 logs.go:282] 0 containers: []
	W0127 13:32:16.702312  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:16.702318  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:16.702376  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:16.736189  427154 cri.go:89] found id: ""
	I0127 13:32:16.736221  427154 logs.go:282] 0 containers: []
	W0127 13:32:16.736233  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:16.736242  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:16.736300  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:16.771929  427154 cri.go:89] found id: ""
	I0127 13:32:16.771956  427154 logs.go:282] 0 containers: []
	W0127 13:32:16.771968  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:16.771976  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:16.772045  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:16.814009  427154 cri.go:89] found id: ""
	I0127 13:32:16.814045  427154 logs.go:282] 0 containers: []
	W0127 13:32:16.814058  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:16.814066  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:16.814140  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:16.849170  427154 cri.go:89] found id: ""
	I0127 13:32:16.849215  427154 logs.go:282] 0 containers: []
	W0127 13:32:16.849227  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:16.849236  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:16.849301  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:16.886125  427154 cri.go:89] found id: ""
	I0127 13:32:16.886153  427154 logs.go:282] 0 containers: []
	W0127 13:32:16.886161  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:16.886167  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:16.886220  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:16.918869  427154 cri.go:89] found id: ""
	I0127 13:32:16.918902  427154 logs.go:282] 0 containers: []
	W0127 13:32:16.918912  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:16.918924  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:16.918940  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:17.000610  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:17.000652  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:17.042757  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:17.042793  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:17.092757  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:17.092792  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:17.105844  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:17.105872  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:17.184052  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:19.684531  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:19.698335  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:19.698413  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:19.732772  427154 cri.go:89] found id: ""
	I0127 13:32:19.732807  427154 logs.go:282] 0 containers: []
	W0127 13:32:19.732823  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:19.732829  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:19.732890  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:19.766599  427154 cri.go:89] found id: ""
	I0127 13:32:19.766633  427154 logs.go:282] 0 containers: []
	W0127 13:32:19.766644  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:19.766653  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:19.766738  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:19.802914  427154 cri.go:89] found id: ""
	I0127 13:32:19.802948  427154 logs.go:282] 0 containers: []
	W0127 13:32:19.802959  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:19.802966  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:19.803040  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:19.837414  427154 cri.go:89] found id: ""
	I0127 13:32:19.837441  427154 logs.go:282] 0 containers: []
	W0127 13:32:19.837449  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:19.837455  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:19.837513  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:19.874970  427154 cri.go:89] found id: ""
	I0127 13:32:19.874994  427154 logs.go:282] 0 containers: []
	W0127 13:32:19.875002  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:19.875007  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:19.875073  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:19.911341  427154 cri.go:89] found id: ""
	I0127 13:32:19.911372  427154 logs.go:282] 0 containers: []
	W0127 13:32:19.911381  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:19.911386  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:19.911451  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:19.944924  427154 cri.go:89] found id: ""
	I0127 13:32:19.944955  427154 logs.go:282] 0 containers: []
	W0127 13:32:19.944966  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:19.944974  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:19.945033  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:19.981021  427154 cri.go:89] found id: ""
	I0127 13:32:19.981061  427154 logs.go:282] 0 containers: []
	W0127 13:32:19.981073  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:19.981085  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:19.981107  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:20.033819  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:20.033862  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:20.048069  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:20.048097  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:20.115292  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:20.115319  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:20.115337  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:20.197188  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:20.197229  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:22.739344  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:22.754930  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:22.755012  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:22.790984  427154 cri.go:89] found id: ""
	I0127 13:32:22.791022  427154 logs.go:282] 0 containers: []
	W0127 13:32:22.791035  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:22.791043  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:22.791122  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:22.831827  427154 cri.go:89] found id: ""
	I0127 13:32:22.831894  427154 logs.go:282] 0 containers: []
	W0127 13:32:22.831907  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:22.831915  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:22.831992  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:22.874723  427154 cri.go:89] found id: ""
	I0127 13:32:22.874754  427154 logs.go:282] 0 containers: []
	W0127 13:32:22.874765  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:22.874772  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:22.874834  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:22.914297  427154 cri.go:89] found id: ""
	I0127 13:32:22.914330  427154 logs.go:282] 0 containers: []
	W0127 13:32:22.914342  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:22.914349  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:22.914406  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:22.950377  427154 cri.go:89] found id: ""
	I0127 13:32:22.950413  427154 logs.go:282] 0 containers: []
	W0127 13:32:22.950425  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:22.950433  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:22.950497  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:22.986362  427154 cri.go:89] found id: ""
	I0127 13:32:22.986393  427154 logs.go:282] 0 containers: []
	W0127 13:32:22.986403  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:22.986409  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:22.986469  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:23.019492  427154 cri.go:89] found id: ""
	I0127 13:32:23.019526  427154 logs.go:282] 0 containers: []
	W0127 13:32:23.019537  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:23.019545  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:23.019620  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:23.060513  427154 cri.go:89] found id: ""
	I0127 13:32:23.060542  427154 logs.go:282] 0 containers: []
	W0127 13:32:23.060550  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:23.060567  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:23.060578  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:23.111057  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:23.111090  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:23.123940  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:23.123963  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:23.204848  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:23.204878  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:23.204903  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:23.285463  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:23.285519  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:25.824985  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:25.838043  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:25.838117  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:25.880942  427154 cri.go:89] found id: ""
	I0127 13:32:25.880972  427154 logs.go:282] 0 containers: []
	W0127 13:32:25.880980  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:25.880987  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:25.881064  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:25.920623  427154 cri.go:89] found id: ""
	I0127 13:32:25.920656  427154 logs.go:282] 0 containers: []
	W0127 13:32:25.920669  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:25.920676  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:25.920742  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:25.959589  427154 cri.go:89] found id: ""
	I0127 13:32:25.959625  427154 logs.go:282] 0 containers: []
	W0127 13:32:25.959639  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:25.959647  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:25.959704  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:25.996129  427154 cri.go:89] found id: ""
	I0127 13:32:25.996159  427154 logs.go:282] 0 containers: []
	W0127 13:32:25.996168  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:25.996173  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:25.996238  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:26.032479  427154 cri.go:89] found id: ""
	I0127 13:32:26.032514  427154 logs.go:282] 0 containers: []
	W0127 13:32:26.032525  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:26.032533  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:26.032609  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:26.066688  427154 cri.go:89] found id: ""
	I0127 13:32:26.066718  427154 logs.go:282] 0 containers: []
	W0127 13:32:26.066728  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:26.066790  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:26.066860  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:26.098904  427154 cri.go:89] found id: ""
	I0127 13:32:26.098938  427154 logs.go:282] 0 containers: []
	W0127 13:32:26.098949  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:26.098956  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:26.099019  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:26.133514  427154 cri.go:89] found id: ""
	I0127 13:32:26.133549  427154 logs.go:282] 0 containers: []
	W0127 13:32:26.133560  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:26.133572  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:26.133585  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:26.182909  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:26.182945  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:26.196083  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:26.196117  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:26.267235  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:26.267268  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:26.267286  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:26.340109  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:26.340145  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:28.882662  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:28.897852  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:28.897933  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:28.941771  427154 cri.go:89] found id: ""
	I0127 13:32:28.941800  427154 logs.go:282] 0 containers: []
	W0127 13:32:28.941809  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:28.941816  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:28.941868  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:28.995949  427154 cri.go:89] found id: ""
	I0127 13:32:28.995973  427154 logs.go:282] 0 containers: []
	W0127 13:32:28.995982  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:28.995988  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:28.996041  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:29.035379  427154 cri.go:89] found id: ""
	I0127 13:32:29.035415  427154 logs.go:282] 0 containers: []
	W0127 13:32:29.035429  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:29.035437  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:29.035501  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:29.072781  427154 cri.go:89] found id: ""
	I0127 13:32:29.072813  427154 logs.go:282] 0 containers: []
	W0127 13:32:29.072825  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:29.072832  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:29.072898  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:29.108403  427154 cri.go:89] found id: ""
	I0127 13:32:29.108437  427154 logs.go:282] 0 containers: []
	W0127 13:32:29.108448  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:29.108456  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:29.108515  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:29.146226  427154 cri.go:89] found id: ""
	I0127 13:32:29.146256  427154 logs.go:282] 0 containers: []
	W0127 13:32:29.146265  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:29.146270  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:29.146321  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:29.180669  427154 cri.go:89] found id: ""
	I0127 13:32:29.180705  427154 logs.go:282] 0 containers: []
	W0127 13:32:29.180718  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:29.180726  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:29.180795  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:29.213486  427154 cri.go:89] found id: ""
	I0127 13:32:29.213522  427154 logs.go:282] 0 containers: []
	W0127 13:32:29.213533  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:29.213548  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:29.213563  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:29.270463  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:29.270497  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:29.285433  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:29.285465  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:29.354238  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:29.354262  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:29.354277  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:29.443490  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:29.443527  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:31.985547  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:31.998685  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:31.998762  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:32.035430  427154 cri.go:89] found id: ""
	I0127 13:32:32.035457  427154 logs.go:282] 0 containers: []
	W0127 13:32:32.035466  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:32.035474  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:32.035540  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:32.072356  427154 cri.go:89] found id: ""
	I0127 13:32:32.072386  427154 logs.go:282] 0 containers: []
	W0127 13:32:32.072397  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:32.072408  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:32.072470  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:32.106392  427154 cri.go:89] found id: ""
	I0127 13:32:32.106424  427154 logs.go:282] 0 containers: []
	W0127 13:32:32.106435  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:32.106443  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:32.106498  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:32.140768  427154 cri.go:89] found id: ""
	I0127 13:32:32.140796  427154 logs.go:282] 0 containers: []
	W0127 13:32:32.140806  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:32.140813  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:32.140873  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:32.177858  427154 cri.go:89] found id: ""
	I0127 13:32:32.177889  427154 logs.go:282] 0 containers: []
	W0127 13:32:32.177902  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:32.177911  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:32.177989  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:32.211854  427154 cri.go:89] found id: ""
	I0127 13:32:32.211884  427154 logs.go:282] 0 containers: []
	W0127 13:32:32.211897  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:32.211905  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:32.211976  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:32.246599  427154 cri.go:89] found id: ""
	I0127 13:32:32.246634  427154 logs.go:282] 0 containers: []
	W0127 13:32:32.246646  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:32.246654  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:32.246720  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:32.283961  427154 cri.go:89] found id: ""
	I0127 13:32:32.283991  427154 logs.go:282] 0 containers: []
	W0127 13:32:32.284002  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:32.284015  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:32.284030  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:32.337991  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:32.338026  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:32.352394  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:32.352418  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:32.436280  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:32.436304  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:32.436317  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:32.516636  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:32.516673  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:35.058674  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:35.072154  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:35.072221  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:35.105042  427154 cri.go:89] found id: ""
	I0127 13:32:35.105074  427154 logs.go:282] 0 containers: []
	W0127 13:32:35.105083  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:35.105089  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:35.105151  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:35.142594  427154 cri.go:89] found id: ""
	I0127 13:32:35.142628  427154 logs.go:282] 0 containers: []
	W0127 13:32:35.142638  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:35.142645  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:35.142709  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:35.178945  427154 cri.go:89] found id: ""
	I0127 13:32:35.178979  427154 logs.go:282] 0 containers: []
	W0127 13:32:35.178991  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:35.178998  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:35.179066  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:35.213216  427154 cri.go:89] found id: ""
	I0127 13:32:35.213243  427154 logs.go:282] 0 containers: []
	W0127 13:32:35.213251  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:35.213257  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:35.213327  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:35.245677  427154 cri.go:89] found id: ""
	I0127 13:32:35.245709  427154 logs.go:282] 0 containers: []
	W0127 13:32:35.245721  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:35.245729  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:35.245796  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:35.280381  427154 cri.go:89] found id: ""
	I0127 13:32:35.280425  427154 logs.go:282] 0 containers: []
	W0127 13:32:35.280439  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:35.280447  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:35.280500  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:35.317463  427154 cri.go:89] found id: ""
	I0127 13:32:35.317498  427154 logs.go:282] 0 containers: []
	W0127 13:32:35.317510  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:35.317517  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:35.317589  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:35.364769  427154 cri.go:89] found id: ""
	I0127 13:32:35.364801  427154 logs.go:282] 0 containers: []
	W0127 13:32:35.364811  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:35.364824  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:35.364842  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:35.441517  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:35.441555  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:35.470656  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:35.470693  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:35.553516  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:35.553539  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:35.553554  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:35.641819  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:35.641860  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:38.188009  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:38.201816  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:38.201902  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:38.236501  427154 cri.go:89] found id: ""
	I0127 13:32:38.236527  427154 logs.go:282] 0 containers: []
	W0127 13:32:38.236535  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:38.236541  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:38.236592  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:38.272056  427154 cri.go:89] found id: ""
	I0127 13:32:38.272089  427154 logs.go:282] 0 containers: []
	W0127 13:32:38.272106  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:38.272114  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:38.272181  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:38.306226  427154 cri.go:89] found id: ""
	I0127 13:32:38.306257  427154 logs.go:282] 0 containers: []
	W0127 13:32:38.306269  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:38.306277  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:38.306354  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:38.339548  427154 cri.go:89] found id: ""
	I0127 13:32:38.339577  427154 logs.go:282] 0 containers: []
	W0127 13:32:38.339593  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:38.339600  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:38.339666  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:38.382113  427154 cri.go:89] found id: ""
	I0127 13:32:38.382137  427154 logs.go:282] 0 containers: []
	W0127 13:32:38.382145  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:38.382150  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:38.382204  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:38.417464  427154 cri.go:89] found id: ""
	I0127 13:32:38.417492  427154 logs.go:282] 0 containers: []
	W0127 13:32:38.417501  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:38.417507  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:38.417558  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:38.453085  427154 cri.go:89] found id: ""
	I0127 13:32:38.453112  427154 logs.go:282] 0 containers: []
	W0127 13:32:38.453120  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:38.453129  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:38.453190  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:38.486805  427154 cri.go:89] found id: ""
	I0127 13:32:38.486831  427154 logs.go:282] 0 containers: []
	W0127 13:32:38.486839  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:38.486851  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:38.486862  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:38.566631  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:38.566664  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:38.608584  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:38.608625  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:38.660692  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:38.660724  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:38.674269  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:38.674299  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:38.749659  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:41.250828  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:41.264325  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:41.264405  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:41.300979  427154 cri.go:89] found id: ""
	I0127 13:32:41.301015  427154 logs.go:282] 0 containers: []
	W0127 13:32:41.301027  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:41.301037  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:41.301129  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:41.340754  427154 cri.go:89] found id: ""
	I0127 13:32:41.340797  427154 logs.go:282] 0 containers: []
	W0127 13:32:41.340810  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:41.340818  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:41.340878  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:41.377290  427154 cri.go:89] found id: ""
	I0127 13:32:41.377320  427154 logs.go:282] 0 containers: []
	W0127 13:32:41.377333  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:41.377343  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:41.377411  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:41.415511  427154 cri.go:89] found id: ""
	I0127 13:32:41.415541  427154 logs.go:282] 0 containers: []
	W0127 13:32:41.415551  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:41.415557  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:41.415612  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:41.449960  427154 cri.go:89] found id: ""
	I0127 13:32:41.449991  427154 logs.go:282] 0 containers: []
	W0127 13:32:41.450000  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:41.450006  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:41.450063  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:41.486100  427154 cri.go:89] found id: ""
	I0127 13:32:41.486142  427154 logs.go:282] 0 containers: []
	W0127 13:32:41.486151  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:41.486156  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:41.486221  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:41.521670  427154 cri.go:89] found id: ""
	I0127 13:32:41.521702  427154 logs.go:282] 0 containers: []
	W0127 13:32:41.521713  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:41.521722  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:41.521799  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:41.558813  427154 cri.go:89] found id: ""
	I0127 13:32:41.558836  427154 logs.go:282] 0 containers: []
	W0127 13:32:41.558844  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:41.558853  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:41.558865  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:41.610640  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:41.610672  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:41.624440  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:41.624471  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:41.690533  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:41.690583  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:41.690602  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:41.765430  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:41.765466  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:44.306907  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:44.320423  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:44.320512  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:44.354570  427154 cri.go:89] found id: ""
	I0127 13:32:44.354601  427154 logs.go:282] 0 containers: []
	W0127 13:32:44.354611  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:44.354618  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:44.354679  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:44.390164  427154 cri.go:89] found id: ""
	I0127 13:32:44.390192  427154 logs.go:282] 0 containers: []
	W0127 13:32:44.390203  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:44.390211  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:44.390275  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:44.425813  427154 cri.go:89] found id: ""
	I0127 13:32:44.425849  427154 logs.go:282] 0 containers: []
	W0127 13:32:44.425861  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:44.425868  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:44.425937  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:44.463572  427154 cri.go:89] found id: ""
	I0127 13:32:44.463602  427154 logs.go:282] 0 containers: []
	W0127 13:32:44.463613  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:44.463621  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:44.463709  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:44.496891  427154 cri.go:89] found id: ""
	I0127 13:32:44.496922  427154 logs.go:282] 0 containers: []
	W0127 13:32:44.496934  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:44.496942  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:44.497006  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:44.529356  427154 cri.go:89] found id: ""
	I0127 13:32:44.529387  427154 logs.go:282] 0 containers: []
	W0127 13:32:44.529399  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:44.529408  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:44.529470  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:44.566674  427154 cri.go:89] found id: ""
	I0127 13:32:44.566701  427154 logs.go:282] 0 containers: []
	W0127 13:32:44.566710  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:44.566715  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:44.566764  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:44.608223  427154 cri.go:89] found id: ""
	I0127 13:32:44.608258  427154 logs.go:282] 0 containers: []
	W0127 13:32:44.608269  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:44.608280  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:44.608293  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:44.667245  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:44.667277  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:44.682397  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:44.682421  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:44.753641  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:44.753671  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:44.753690  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:44.830321  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:44.830354  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:47.372845  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:47.386151  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:47.386211  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:47.424867  427154 cri.go:89] found id: ""
	I0127 13:32:47.424895  427154 logs.go:282] 0 containers: []
	W0127 13:32:47.424903  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:47.424910  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:47.424974  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:47.464718  427154 cri.go:89] found id: ""
	I0127 13:32:47.464743  427154 logs.go:282] 0 containers: []
	W0127 13:32:47.464751  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:47.464757  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:47.464817  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:47.498108  427154 cri.go:89] found id: ""
	I0127 13:32:47.498146  427154 logs.go:282] 0 containers: []
	W0127 13:32:47.498159  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:47.498166  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:47.498237  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:47.537613  427154 cri.go:89] found id: ""
	I0127 13:32:47.537646  427154 logs.go:282] 0 containers: []
	W0127 13:32:47.537658  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:47.537665  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:47.537745  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:47.574279  427154 cri.go:89] found id: ""
	I0127 13:32:47.574307  427154 logs.go:282] 0 containers: []
	W0127 13:32:47.574315  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:47.574321  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:47.574385  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:47.614136  427154 cri.go:89] found id: ""
	I0127 13:32:47.614165  427154 logs.go:282] 0 containers: []
	W0127 13:32:47.614174  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:47.614180  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:47.614233  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:47.647858  427154 cri.go:89] found id: ""
	I0127 13:32:47.647893  427154 logs.go:282] 0 containers: []
	W0127 13:32:47.647905  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:47.647913  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:47.647978  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:47.679668  427154 cri.go:89] found id: ""
	I0127 13:32:47.679705  427154 logs.go:282] 0 containers: []
	W0127 13:32:47.679717  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:47.679730  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:47.679745  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:47.730378  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:47.730414  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:47.745049  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:47.745083  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:47.814452  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:47.814484  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:47.814502  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:47.896140  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:47.896179  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:50.439877  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:50.454256  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:50.454338  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:50.488571  427154 cri.go:89] found id: ""
	I0127 13:32:50.488608  427154 logs.go:282] 0 containers: []
	W0127 13:32:50.488620  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:50.488628  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:50.488696  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:50.522958  427154 cri.go:89] found id: ""
	I0127 13:32:50.522989  427154 logs.go:282] 0 containers: []
	W0127 13:32:50.523001  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:50.523009  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:50.523095  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:50.559131  427154 cri.go:89] found id: ""
	I0127 13:32:50.559165  427154 logs.go:282] 0 containers: []
	W0127 13:32:50.559174  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:50.559180  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:50.559235  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:50.593344  427154 cri.go:89] found id: ""
	I0127 13:32:50.593378  427154 logs.go:282] 0 containers: []
	W0127 13:32:50.593395  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:50.593403  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:50.593468  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:50.628064  427154 cri.go:89] found id: ""
	I0127 13:32:50.628106  427154 logs.go:282] 0 containers: []
	W0127 13:32:50.628119  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:50.628128  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:50.628191  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:50.663302  427154 cri.go:89] found id: ""
	I0127 13:32:50.663334  427154 logs.go:282] 0 containers: []
	W0127 13:32:50.663343  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:50.663349  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:50.663411  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:50.695650  427154 cri.go:89] found id: ""
	I0127 13:32:50.695682  427154 logs.go:282] 0 containers: []
	W0127 13:32:50.695693  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:50.695701  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:50.695765  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:50.738954  427154 cri.go:89] found id: ""
	I0127 13:32:50.738983  427154 logs.go:282] 0 containers: []
	W0127 13:32:50.738992  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:50.739002  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:50.739016  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:50.814949  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:50.814995  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:50.858390  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:50.858425  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:50.935276  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:50.935333  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:50.951120  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:50.951153  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:51.030473  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:53.532651  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:53.551250  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:53.551341  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:53.585659  427154 cri.go:89] found id: ""
	I0127 13:32:53.585696  427154 logs.go:282] 0 containers: []
	W0127 13:32:53.585710  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:53.585719  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:53.585791  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:53.626106  427154 cri.go:89] found id: ""
	I0127 13:32:53.626148  427154 logs.go:282] 0 containers: []
	W0127 13:32:53.626161  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:53.626169  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:53.626237  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:53.666550  427154 cri.go:89] found id: ""
	I0127 13:32:53.666585  427154 logs.go:282] 0 containers: []
	W0127 13:32:53.666596  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:53.666603  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:53.666668  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:53.707519  427154 cri.go:89] found id: ""
	I0127 13:32:53.707551  427154 logs.go:282] 0 containers: []
	W0127 13:32:53.707564  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:53.707572  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:53.707624  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:53.742620  427154 cri.go:89] found id: ""
	I0127 13:32:53.742656  427154 logs.go:282] 0 containers: []
	W0127 13:32:53.742669  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:53.742677  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:53.742742  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:53.776712  427154 cri.go:89] found id: ""
	I0127 13:32:53.776745  427154 logs.go:282] 0 containers: []
	W0127 13:32:53.776757  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:53.776766  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:53.776833  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:53.811252  427154 cri.go:89] found id: ""
	I0127 13:32:53.811286  427154 logs.go:282] 0 containers: []
	W0127 13:32:53.811297  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:53.811305  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:53.811380  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:53.856600  427154 cri.go:89] found id: ""
	I0127 13:32:53.856641  427154 logs.go:282] 0 containers: []
	W0127 13:32:53.856653  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:53.856673  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:53.856690  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:53.909698  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:53.909737  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:53.926813  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:53.926860  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:54.006471  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:54.006502  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:54.006522  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:54.102852  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:54.102890  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:56.652633  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:56.667826  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:56.667917  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:56.716469  427154 cri.go:89] found id: ""
	I0127 13:32:56.716498  427154 logs.go:282] 0 containers: []
	W0127 13:32:56.716510  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:56.716517  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:56.716581  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:56.753034  427154 cri.go:89] found id: ""
	I0127 13:32:56.753061  427154 logs.go:282] 0 containers: []
	W0127 13:32:56.753070  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:56.753075  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:56.753128  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:56.794480  427154 cri.go:89] found id: ""
	I0127 13:32:56.794519  427154 logs.go:282] 0 containers: []
	W0127 13:32:56.794551  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:56.794561  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:56.794628  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:56.831112  427154 cri.go:89] found id: ""
	I0127 13:32:56.831142  427154 logs.go:282] 0 containers: []
	W0127 13:32:56.831153  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:56.831161  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:56.831224  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:56.867595  427154 cri.go:89] found id: ""
	I0127 13:32:56.867625  427154 logs.go:282] 0 containers: []
	W0127 13:32:56.867633  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:56.867639  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:56.867698  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:56.902472  427154 cri.go:89] found id: ""
	I0127 13:32:56.902506  427154 logs.go:282] 0 containers: []
	W0127 13:32:56.902520  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:56.902528  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:56.902605  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:32:56.937962  427154 cri.go:89] found id: ""
	I0127 13:32:56.938000  427154 logs.go:282] 0 containers: []
	W0127 13:32:56.938011  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:32:56.938018  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:32:56.938082  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:32:56.973471  427154 cri.go:89] found id: ""
	I0127 13:32:56.973503  427154 logs.go:282] 0 containers: []
	W0127 13:32:56.973516  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:32:56.973529  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:32:56.973543  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:32:57.025357  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:32:57.025393  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:32:57.039756  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:32:57.039789  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:32:57.110569  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:32:57.110596  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:32:57.110610  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:32:57.191202  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:32:57.191233  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:32:59.747162  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:32:59.760397  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:32:59.760458  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:32:59.796213  427154 cri.go:89] found id: ""
	I0127 13:32:59.796252  427154 logs.go:282] 0 containers: []
	W0127 13:32:59.796265  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:32:59.796273  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:32:59.796328  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:32:59.832358  427154 cri.go:89] found id: ""
	I0127 13:32:59.832393  427154 logs.go:282] 0 containers: []
	W0127 13:32:59.832401  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:32:59.832407  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:32:59.832460  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:32:59.866632  427154 cri.go:89] found id: ""
	I0127 13:32:59.866665  427154 logs.go:282] 0 containers: []
	W0127 13:32:59.866676  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:32:59.866684  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:32:59.866747  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:32:59.901650  427154 cri.go:89] found id: ""
	I0127 13:32:59.901685  427154 logs.go:282] 0 containers: []
	W0127 13:32:59.901694  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:32:59.901700  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:32:59.901755  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:32:59.938791  427154 cri.go:89] found id: ""
	I0127 13:32:59.938818  427154 logs.go:282] 0 containers: []
	W0127 13:32:59.938880  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:32:59.938895  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:32:59.938965  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:32:59.975331  427154 cri.go:89] found id: ""
	I0127 13:32:59.975362  427154 logs.go:282] 0 containers: []
	W0127 13:32:59.975379  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:32:59.975387  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:32:59.975453  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:00.010454  427154 cri.go:89] found id: ""
	I0127 13:33:00.010489  427154 logs.go:282] 0 containers: []
	W0127 13:33:00.010502  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:00.010511  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:00.010598  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:00.052722  427154 cri.go:89] found id: ""
	I0127 13:33:00.052747  427154 logs.go:282] 0 containers: []
	W0127 13:33:00.052757  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:00.052768  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:00.052784  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:00.102310  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:00.102341  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:00.115922  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:00.115946  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:00.184346  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:00.184368  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:00.184382  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:00.263609  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:00.263645  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:02.805590  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:02.820810  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:02.820887  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:02.858765  427154 cri.go:89] found id: ""
	I0127 13:33:02.858801  427154 logs.go:282] 0 containers: []
	W0127 13:33:02.858814  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:02.858823  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:02.858887  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:02.896628  427154 cri.go:89] found id: ""
	I0127 13:33:02.896662  427154 logs.go:282] 0 containers: []
	W0127 13:33:02.896673  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:02.896681  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:02.896745  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:02.934059  427154 cri.go:89] found id: ""
	I0127 13:33:02.934094  427154 logs.go:282] 0 containers: []
	W0127 13:33:02.934106  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:02.934113  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:02.934195  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:02.975271  427154 cri.go:89] found id: ""
	I0127 13:33:02.975298  427154 logs.go:282] 0 containers: []
	W0127 13:33:02.975307  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:02.975313  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:02.975372  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:03.020124  427154 cri.go:89] found id: ""
	I0127 13:33:03.020157  427154 logs.go:282] 0 containers: []
	W0127 13:33:03.020170  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:03.020178  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:03.020249  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:03.060185  427154 cri.go:89] found id: ""
	I0127 13:33:03.060215  427154 logs.go:282] 0 containers: []
	W0127 13:33:03.060226  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:03.060235  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:03.060303  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:03.105005  427154 cri.go:89] found id: ""
	I0127 13:33:03.105038  427154 logs.go:282] 0 containers: []
	W0127 13:33:03.105050  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:03.105057  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:03.105127  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:03.152950  427154 cri.go:89] found id: ""
	I0127 13:33:03.152985  427154 logs.go:282] 0 containers: []
	W0127 13:33:03.152997  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:03.153011  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:03.153027  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:03.207612  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:03.207648  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:03.226008  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:03.226040  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:03.326168  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:03.326198  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:03.326213  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:03.412217  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:03.412256  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:05.985101  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:06.003831  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:06.003919  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:06.041406  427154 cri.go:89] found id: ""
	I0127 13:33:06.041441  427154 logs.go:282] 0 containers: []
	W0127 13:33:06.041452  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:06.041461  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:06.041529  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:06.082210  427154 cri.go:89] found id: ""
	I0127 13:33:06.082245  427154 logs.go:282] 0 containers: []
	W0127 13:33:06.082256  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:06.082263  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:06.082329  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:06.120998  427154 cri.go:89] found id: ""
	I0127 13:33:06.121031  427154 logs.go:282] 0 containers: []
	W0127 13:33:06.121040  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:06.121055  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:06.121126  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:06.167707  427154 cri.go:89] found id: ""
	I0127 13:33:06.167739  427154 logs.go:282] 0 containers: []
	W0127 13:33:06.167752  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:06.167759  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:06.167825  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:06.216252  427154 cri.go:89] found id: ""
	I0127 13:33:06.216288  427154 logs.go:282] 0 containers: []
	W0127 13:33:06.216299  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:06.216307  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:06.216377  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:06.264991  427154 cri.go:89] found id: ""
	I0127 13:33:06.265026  427154 logs.go:282] 0 containers: []
	W0127 13:33:06.265037  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:06.265045  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:06.265120  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:06.306161  427154 cri.go:89] found id: ""
	I0127 13:33:06.306200  427154 logs.go:282] 0 containers: []
	W0127 13:33:06.306213  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:06.306221  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:06.306279  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:06.341944  427154 cri.go:89] found id: ""
	I0127 13:33:06.341979  427154 logs.go:282] 0 containers: []
	W0127 13:33:06.341991  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:06.342004  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:06.342024  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:06.384156  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:06.384188  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:06.439251  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:06.439282  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:06.456732  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:06.456759  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:06.533866  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:06.533889  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:06.533902  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:09.126659  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:09.144785  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:09.144861  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:09.187991  427154 cri.go:89] found id: ""
	I0127 13:33:09.188028  427154 logs.go:282] 0 containers: []
	W0127 13:33:09.188039  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:09.188048  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:09.188116  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:09.252605  427154 cri.go:89] found id: ""
	I0127 13:33:09.252656  427154 logs.go:282] 0 containers: []
	W0127 13:33:09.252676  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:09.252685  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:09.252750  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:09.299433  427154 cri.go:89] found id: ""
	I0127 13:33:09.299464  427154 logs.go:282] 0 containers: []
	W0127 13:33:09.299477  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:09.299485  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:09.299552  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:09.335983  427154 cri.go:89] found id: ""
	I0127 13:33:09.336012  427154 logs.go:282] 0 containers: []
	W0127 13:33:09.336022  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:09.336030  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:09.336110  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:09.376082  427154 cri.go:89] found id: ""
	I0127 13:33:09.376115  427154 logs.go:282] 0 containers: []
	W0127 13:33:09.376126  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:09.376139  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:09.376203  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:09.416213  427154 cri.go:89] found id: ""
	I0127 13:33:09.416247  427154 logs.go:282] 0 containers: []
	W0127 13:33:09.416259  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:09.416267  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:09.416337  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:09.463004  427154 cri.go:89] found id: ""
	I0127 13:33:09.463037  427154 logs.go:282] 0 containers: []
	W0127 13:33:09.463049  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:09.463057  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:09.463202  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:09.498586  427154 cri.go:89] found id: ""
	I0127 13:33:09.498622  427154 logs.go:282] 0 containers: []
	W0127 13:33:09.498634  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:09.498648  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:09.498665  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:09.545378  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:09.545414  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:09.606029  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:09.606064  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:09.624239  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:09.624271  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:09.707766  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:09.707795  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:09.707812  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:12.289331  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:12.303709  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:12.303767  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:12.343468  427154 cri.go:89] found id: ""
	I0127 13:33:12.343493  427154 logs.go:282] 0 containers: []
	W0127 13:33:12.343502  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:12.343508  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:12.343565  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:12.378857  427154 cri.go:89] found id: ""
	I0127 13:33:12.378889  427154 logs.go:282] 0 containers: []
	W0127 13:33:12.378900  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:12.378909  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:12.378978  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:12.413030  427154 cri.go:89] found id: ""
	I0127 13:33:12.413060  427154 logs.go:282] 0 containers: []
	W0127 13:33:12.413068  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:12.413075  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:12.413151  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:12.452889  427154 cri.go:89] found id: ""
	I0127 13:33:12.452926  427154 logs.go:282] 0 containers: []
	W0127 13:33:12.452938  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:12.452946  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:12.453000  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:12.490236  427154 cri.go:89] found id: ""
	I0127 13:33:12.490270  427154 logs.go:282] 0 containers: []
	W0127 13:33:12.490281  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:12.490290  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:12.490352  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:12.525466  427154 cri.go:89] found id: ""
	I0127 13:33:12.525497  427154 logs.go:282] 0 containers: []
	W0127 13:33:12.525510  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:12.525520  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:12.525584  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:12.560848  427154 cri.go:89] found id: ""
	I0127 13:33:12.560879  427154 logs.go:282] 0 containers: []
	W0127 13:33:12.560889  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:12.560895  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:12.560944  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:12.596637  427154 cri.go:89] found id: ""
	I0127 13:33:12.596672  427154 logs.go:282] 0 containers: []
	W0127 13:33:12.596683  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:12.596696  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:12.596718  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:12.648419  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:12.648456  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:12.662771  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:12.662803  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:12.734098  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:12.734122  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:12.734133  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:12.822813  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:12.822852  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:15.364193  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:15.380086  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:15.380162  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:15.419981  427154 cri.go:89] found id: ""
	I0127 13:33:15.420015  427154 logs.go:282] 0 containers: []
	W0127 13:33:15.420028  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:15.420036  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:15.420098  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:15.469537  427154 cri.go:89] found id: ""
	I0127 13:33:15.469566  427154 logs.go:282] 0 containers: []
	W0127 13:33:15.469574  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:15.469581  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:15.469638  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:15.510732  427154 cri.go:89] found id: ""
	I0127 13:33:15.510768  427154 logs.go:282] 0 containers: []
	W0127 13:33:15.510781  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:15.510789  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:15.510855  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:15.550165  427154 cri.go:89] found id: ""
	I0127 13:33:15.550201  427154 logs.go:282] 0 containers: []
	W0127 13:33:15.550212  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:15.550222  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:15.550284  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:15.586454  427154 cri.go:89] found id: ""
	I0127 13:33:15.586484  427154 logs.go:282] 0 containers: []
	W0127 13:33:15.586497  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:15.586504  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:15.586577  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:15.633861  427154 cri.go:89] found id: ""
	I0127 13:33:15.633894  427154 logs.go:282] 0 containers: []
	W0127 13:33:15.633906  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:15.633915  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:15.633986  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:15.695637  427154 cri.go:89] found id: ""
	I0127 13:33:15.695665  427154 logs.go:282] 0 containers: []
	W0127 13:33:15.695674  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:15.695681  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:15.695744  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:15.758686  427154 cri.go:89] found id: ""
	I0127 13:33:15.758721  427154 logs.go:282] 0 containers: []
	W0127 13:33:15.758733  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:15.758747  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:15.758762  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:15.829806  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:15.829844  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:15.844722  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:15.844750  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:15.921846  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:15.921878  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:15.921893  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:16.002428  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:16.002466  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:18.552079  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:18.569463  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:18.569541  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:18.618465  427154 cri.go:89] found id: ""
	I0127 13:33:18.618500  427154 logs.go:282] 0 containers: []
	W0127 13:33:18.618514  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:18.618522  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:18.618600  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:18.658041  427154 cri.go:89] found id: ""
	I0127 13:33:18.658069  427154 logs.go:282] 0 containers: []
	W0127 13:33:18.658079  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:18.658086  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:18.658135  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:18.693840  427154 cri.go:89] found id: ""
	I0127 13:33:18.693867  427154 logs.go:282] 0 containers: []
	W0127 13:33:18.693875  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:18.693881  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:18.693932  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:18.725820  427154 cri.go:89] found id: ""
	I0127 13:33:18.725851  427154 logs.go:282] 0 containers: []
	W0127 13:33:18.725862  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:18.725883  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:18.725952  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:18.761505  427154 cri.go:89] found id: ""
	I0127 13:33:18.761529  427154 logs.go:282] 0 containers: []
	W0127 13:33:18.761538  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:18.761543  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:18.761598  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:18.799125  427154 cri.go:89] found id: ""
	I0127 13:33:18.799156  427154 logs.go:282] 0 containers: []
	W0127 13:33:18.799168  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:18.799176  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:18.799251  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:18.834825  427154 cri.go:89] found id: ""
	I0127 13:33:18.834856  427154 logs.go:282] 0 containers: []
	W0127 13:33:18.834866  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:18.834873  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:18.834935  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:18.873686  427154 cri.go:89] found id: ""
	I0127 13:33:18.873717  427154 logs.go:282] 0 containers: []
	W0127 13:33:18.873729  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:18.873743  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:18.873766  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:18.947486  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:18.947521  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:18.962114  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:18.962139  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:19.027979  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:19.028004  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:19.028021  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:19.108583  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:19.108619  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:21.647784  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:21.667379  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:21.667453  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:21.722351  427154 cri.go:89] found id: ""
	I0127 13:33:21.722387  427154 logs.go:282] 0 containers: []
	W0127 13:33:21.722399  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:21.722407  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:21.722471  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:21.757793  427154 cri.go:89] found id: ""
	I0127 13:33:21.757820  427154 logs.go:282] 0 containers: []
	W0127 13:33:21.757829  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:21.757835  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:21.757887  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:21.792568  427154 cri.go:89] found id: ""
	I0127 13:33:21.792599  427154 logs.go:282] 0 containers: []
	W0127 13:33:21.792609  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:21.792615  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:21.792676  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:21.826500  427154 cri.go:89] found id: ""
	I0127 13:33:21.826547  427154 logs.go:282] 0 containers: []
	W0127 13:33:21.826561  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:21.826571  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:21.826637  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:21.864011  427154 cri.go:89] found id: ""
	I0127 13:33:21.864036  427154 logs.go:282] 0 containers: []
	W0127 13:33:21.864045  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:21.864052  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:21.864112  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:21.901973  427154 cri.go:89] found id: ""
	I0127 13:33:21.902000  427154 logs.go:282] 0 containers: []
	W0127 13:33:21.902011  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:21.902020  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:21.902081  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:21.939801  427154 cri.go:89] found id: ""
	I0127 13:33:21.939831  427154 logs.go:282] 0 containers: []
	W0127 13:33:21.939841  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:21.939849  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:21.939915  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:21.978269  427154 cri.go:89] found id: ""
	I0127 13:33:21.978300  427154 logs.go:282] 0 containers: []
	W0127 13:33:21.978312  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:21.978327  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:21.978351  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:22.033968  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:22.034015  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:22.049581  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:22.049611  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:22.121088  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:22.121124  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:22.121139  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:22.202050  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:22.202093  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:24.741983  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:24.755767  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:24.755834  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:24.791550  427154 cri.go:89] found id: ""
	I0127 13:33:24.791580  427154 logs.go:282] 0 containers: []
	W0127 13:33:24.791594  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:24.791600  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:24.791666  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:24.830394  427154 cri.go:89] found id: ""
	I0127 13:33:24.830431  427154 logs.go:282] 0 containers: []
	W0127 13:33:24.830444  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:24.830452  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:24.830514  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:24.865310  427154 cri.go:89] found id: ""
	I0127 13:33:24.865344  427154 logs.go:282] 0 containers: []
	W0127 13:33:24.865354  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:24.865359  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:24.865413  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:24.901410  427154 cri.go:89] found id: ""
	I0127 13:33:24.901436  427154 logs.go:282] 0 containers: []
	W0127 13:33:24.901444  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:24.901450  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:24.901500  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:24.936499  427154 cri.go:89] found id: ""
	I0127 13:33:24.936530  427154 logs.go:282] 0 containers: []
	W0127 13:33:24.936541  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:24.936548  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:24.936619  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:24.974498  427154 cri.go:89] found id: ""
	I0127 13:33:24.974530  427154 logs.go:282] 0 containers: []
	W0127 13:33:24.974560  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:24.974568  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:24.974633  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:25.014321  427154 cri.go:89] found id: ""
	I0127 13:33:25.014352  427154 logs.go:282] 0 containers: []
	W0127 13:33:25.014361  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:25.014367  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:25.014420  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:25.050450  427154 cri.go:89] found id: ""
	I0127 13:33:25.050475  427154 logs.go:282] 0 containers: []
	W0127 13:33:25.050483  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:25.050493  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:25.050503  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:25.086843  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:25.086878  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:25.140067  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:25.140096  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:25.153077  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:25.153111  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:25.227977  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:25.228004  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:25.228023  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:27.810656  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:27.824419  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:27.824481  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:27.865000  427154 cri.go:89] found id: ""
	I0127 13:33:27.865038  427154 logs.go:282] 0 containers: []
	W0127 13:33:27.865051  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:27.865059  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:27.865121  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:27.901557  427154 cri.go:89] found id: ""
	I0127 13:33:27.901584  427154 logs.go:282] 0 containers: []
	W0127 13:33:27.901593  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:27.901599  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:27.901652  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:27.940599  427154 cri.go:89] found id: ""
	I0127 13:33:27.940631  427154 logs.go:282] 0 containers: []
	W0127 13:33:27.940643  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:27.940651  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:27.940713  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:27.977375  427154 cri.go:89] found id: ""
	I0127 13:33:27.977406  427154 logs.go:282] 0 containers: []
	W0127 13:33:27.977417  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:27.977425  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:27.977485  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:28.019781  427154 cri.go:89] found id: ""
	I0127 13:33:28.019814  427154 logs.go:282] 0 containers: []
	W0127 13:33:28.019826  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:28.019834  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:28.019898  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:28.059489  427154 cri.go:89] found id: ""
	I0127 13:33:28.059516  427154 logs.go:282] 0 containers: []
	W0127 13:33:28.059524  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:28.059535  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:28.059595  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:28.099202  427154 cri.go:89] found id: ""
	I0127 13:33:28.099227  427154 logs.go:282] 0 containers: []
	W0127 13:33:28.099238  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:28.099247  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:28.099300  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:28.138912  427154 cri.go:89] found id: ""
	I0127 13:33:28.138940  427154 logs.go:282] 0 containers: []
	W0127 13:33:28.138951  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:28.138964  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:28.138979  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:28.191990  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:28.192025  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:28.205904  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:28.205931  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:28.276816  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:28.276837  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:28.276851  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:28.356159  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:28.356192  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:30.895002  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:30.910669  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:30.910761  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:30.947396  427154 cri.go:89] found id: ""
	I0127 13:33:30.947425  427154 logs.go:282] 0 containers: []
	W0127 13:33:30.947433  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:30.947439  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:30.947501  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:30.983517  427154 cri.go:89] found id: ""
	I0127 13:33:30.983551  427154 logs.go:282] 0 containers: []
	W0127 13:33:30.983563  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:30.983570  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:30.983622  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:31.017718  427154 cri.go:89] found id: ""
	I0127 13:33:31.017752  427154 logs.go:282] 0 containers: []
	W0127 13:33:31.017763  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:31.017770  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:31.017836  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:31.054740  427154 cri.go:89] found id: ""
	I0127 13:33:31.054771  427154 logs.go:282] 0 containers: []
	W0127 13:33:31.054782  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:31.054789  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:31.054852  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:31.089159  427154 cri.go:89] found id: ""
	I0127 13:33:31.089190  427154 logs.go:282] 0 containers: []
	W0127 13:33:31.089202  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:31.089211  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:31.089276  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:31.123159  427154 cri.go:89] found id: ""
	I0127 13:33:31.123190  427154 logs.go:282] 0 containers: []
	W0127 13:33:31.123199  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:31.123206  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:31.123265  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:31.160391  427154 cri.go:89] found id: ""
	I0127 13:33:31.160419  427154 logs.go:282] 0 containers: []
	W0127 13:33:31.160427  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:31.160434  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:31.160483  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:31.195911  427154 cri.go:89] found id: ""
	I0127 13:33:31.195942  427154 logs.go:282] 0 containers: []
	W0127 13:33:31.195951  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:31.195961  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:31.195973  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:31.245099  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:31.245133  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:31.258438  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:31.258460  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:31.331956  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:31.331985  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:31.332001  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:31.410519  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:31.410574  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:33.955916  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:33.969877  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:33.969938  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:34.008680  427154 cri.go:89] found id: ""
	I0127 13:33:34.008711  427154 logs.go:282] 0 containers: []
	W0127 13:33:34.008724  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:34.008732  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:34.008802  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:34.049626  427154 cri.go:89] found id: ""
	I0127 13:33:34.049649  427154 logs.go:282] 0 containers: []
	W0127 13:33:34.049656  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:34.049662  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:34.049724  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:34.085056  427154 cri.go:89] found id: ""
	I0127 13:33:34.085083  427154 logs.go:282] 0 containers: []
	W0127 13:33:34.085092  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:34.085097  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:34.085148  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:34.119795  427154 cri.go:89] found id: ""
	I0127 13:33:34.119821  427154 logs.go:282] 0 containers: []
	W0127 13:33:34.119830  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:34.119835  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:34.119883  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:34.154936  427154 cri.go:89] found id: ""
	I0127 13:33:34.154963  427154 logs.go:282] 0 containers: []
	W0127 13:33:34.154984  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:34.154992  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:34.155056  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:34.189440  427154 cri.go:89] found id: ""
	I0127 13:33:34.189465  427154 logs.go:282] 0 containers: []
	W0127 13:33:34.189473  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:34.189480  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:34.189532  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:34.222857  427154 cri.go:89] found id: ""
	I0127 13:33:34.222885  427154 logs.go:282] 0 containers: []
	W0127 13:33:34.222894  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:34.222900  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:34.222949  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:34.258223  427154 cri.go:89] found id: ""
	I0127 13:33:34.258253  427154 logs.go:282] 0 containers: []
	W0127 13:33:34.258267  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:34.258281  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:34.258297  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:34.332731  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:34.332763  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:34.371264  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:34.371307  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:34.422290  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:34.422325  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:34.437993  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:34.438026  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:34.515385  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:37.015744  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:37.030991  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:37.031056  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:37.076044  427154 cri.go:89] found id: ""
	I0127 13:33:37.076077  427154 logs.go:282] 0 containers: []
	W0127 13:33:37.076088  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:37.076097  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:37.076158  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:37.110408  427154 cri.go:89] found id: ""
	I0127 13:33:37.110436  427154 logs.go:282] 0 containers: []
	W0127 13:33:37.110448  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:37.110457  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:37.110521  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:37.150197  427154 cri.go:89] found id: ""
	I0127 13:33:37.150225  427154 logs.go:282] 0 containers: []
	W0127 13:33:37.150235  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:37.150243  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:37.150319  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:37.191220  427154 cri.go:89] found id: ""
	I0127 13:33:37.191247  427154 logs.go:282] 0 containers: []
	W0127 13:33:37.191258  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:37.191266  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:37.191317  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:37.232425  427154 cri.go:89] found id: ""
	I0127 13:33:37.232456  427154 logs.go:282] 0 containers: []
	W0127 13:33:37.232467  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:37.232475  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:37.232531  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:37.267176  427154 cri.go:89] found id: ""
	I0127 13:33:37.267206  427154 logs.go:282] 0 containers: []
	W0127 13:33:37.267217  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:37.267225  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:37.267305  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:37.300900  427154 cri.go:89] found id: ""
	I0127 13:33:37.300930  427154 logs.go:282] 0 containers: []
	W0127 13:33:37.300939  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:37.300945  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:37.300998  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:37.336608  427154 cri.go:89] found id: ""
	I0127 13:33:37.336641  427154 logs.go:282] 0 containers: []
	W0127 13:33:37.336659  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:37.336671  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:37.336688  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:37.374126  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:37.374153  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:37.428022  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:37.428105  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:37.446088  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:37.446139  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:37.524962  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:37.524991  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:37.525007  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:40.109289  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:40.122737  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:40.122803  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:40.157859  427154 cri.go:89] found id: ""
	I0127 13:33:40.157885  427154 logs.go:282] 0 containers: []
	W0127 13:33:40.157893  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:40.157898  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:40.157952  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:40.194090  427154 cri.go:89] found id: ""
	I0127 13:33:40.194123  427154 logs.go:282] 0 containers: []
	W0127 13:33:40.194135  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:40.194143  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:40.194204  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:40.228862  427154 cri.go:89] found id: ""
	I0127 13:33:40.228892  427154 logs.go:282] 0 containers: []
	W0127 13:33:40.228901  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:40.228906  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:40.228956  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:40.265941  427154 cri.go:89] found id: ""
	I0127 13:33:40.265975  427154 logs.go:282] 0 containers: []
	W0127 13:33:40.265986  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:40.265994  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:40.266053  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:40.298916  427154 cri.go:89] found id: ""
	I0127 13:33:40.298955  427154 logs.go:282] 0 containers: []
	W0127 13:33:40.298968  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:40.298976  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:40.299042  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:40.334348  427154 cri.go:89] found id: ""
	I0127 13:33:40.334376  427154 logs.go:282] 0 containers: []
	W0127 13:33:40.334384  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:40.334393  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:40.334441  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:40.369424  427154 cri.go:89] found id: ""
	I0127 13:33:40.369452  427154 logs.go:282] 0 containers: []
	W0127 13:33:40.369460  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:40.369466  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:40.369515  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:40.403379  427154 cri.go:89] found id: ""
	I0127 13:33:40.403413  427154 logs.go:282] 0 containers: []
	W0127 13:33:40.403425  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:40.403439  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:40.403454  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:40.455119  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:40.455156  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:40.468718  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:40.468754  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:40.535810  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:40.535839  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:40.535854  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:40.615729  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:40.615768  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:43.154977  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:43.168652  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:43.168722  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:43.207728  427154 cri.go:89] found id: ""
	I0127 13:33:43.207756  427154 logs.go:282] 0 containers: []
	W0127 13:33:43.207768  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:43.207776  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:43.207841  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:43.244229  427154 cri.go:89] found id: ""
	I0127 13:33:43.244257  427154 logs.go:282] 0 containers: []
	W0127 13:33:43.244268  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:43.244273  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:43.244329  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:43.279155  427154 cri.go:89] found id: ""
	I0127 13:33:43.279186  427154 logs.go:282] 0 containers: []
	W0127 13:33:43.279197  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:43.279205  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:43.279264  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:43.315611  427154 cri.go:89] found id: ""
	I0127 13:33:43.315639  427154 logs.go:282] 0 containers: []
	W0127 13:33:43.315647  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:43.315654  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:43.315711  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:43.349771  427154 cri.go:89] found id: ""
	I0127 13:33:43.349801  427154 logs.go:282] 0 containers: []
	W0127 13:33:43.349811  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:43.349819  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:43.349885  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:43.388359  427154 cri.go:89] found id: ""
	I0127 13:33:43.388393  427154 logs.go:282] 0 containers: []
	W0127 13:33:43.388405  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:43.388414  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:43.388476  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:43.426933  427154 cri.go:89] found id: ""
	I0127 13:33:43.426967  427154 logs.go:282] 0 containers: []
	W0127 13:33:43.426979  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:43.426987  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:43.427055  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:43.464265  427154 cri.go:89] found id: ""
	I0127 13:33:43.464291  427154 logs.go:282] 0 containers: []
	W0127 13:33:43.464300  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:43.464311  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:43.464324  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:43.504738  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:43.504775  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:43.557060  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:43.557092  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:43.571780  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:43.571805  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:43.642613  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:43.642640  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:43.642656  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:46.220052  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:46.233791  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:46.233855  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:46.267921  427154 cri.go:89] found id: ""
	I0127 13:33:46.267958  427154 logs.go:282] 0 containers: []
	W0127 13:33:46.267970  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:46.267981  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:46.268053  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:46.304081  427154 cri.go:89] found id: ""
	I0127 13:33:46.304118  427154 logs.go:282] 0 containers: []
	W0127 13:33:46.304127  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:46.304133  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:46.304190  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:46.339656  427154 cri.go:89] found id: ""
	I0127 13:33:46.339682  427154 logs.go:282] 0 containers: []
	W0127 13:33:46.339692  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:46.339698  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:46.339755  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:46.373473  427154 cri.go:89] found id: ""
	I0127 13:33:46.373500  427154 logs.go:282] 0 containers: []
	W0127 13:33:46.373508  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:46.373514  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:46.373580  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:46.415143  427154 cri.go:89] found id: ""
	I0127 13:33:46.415175  427154 logs.go:282] 0 containers: []
	W0127 13:33:46.415187  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:46.415195  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:46.415272  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:46.458853  427154 cri.go:89] found id: ""
	I0127 13:33:46.458887  427154 logs.go:282] 0 containers: []
	W0127 13:33:46.458899  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:46.458908  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:46.458969  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:46.495493  427154 cri.go:89] found id: ""
	I0127 13:33:46.495523  427154 logs.go:282] 0 containers: []
	W0127 13:33:46.495533  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:46.495539  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:46.495588  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:46.531603  427154 cri.go:89] found id: ""
	I0127 13:33:46.531633  427154 logs.go:282] 0 containers: []
	W0127 13:33:46.531645  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:46.531657  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:46.531671  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:46.546636  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:46.546670  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:46.624199  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:46.624223  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:46.624236  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:46.701429  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:46.701467  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:46.744113  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:46.744144  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:49.294635  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:49.309139  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:49.309212  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:49.347425  427154 cri.go:89] found id: ""
	I0127 13:33:49.347450  427154 logs.go:282] 0 containers: []
	W0127 13:33:49.347460  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:49.347467  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:49.347521  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:49.384699  427154 cri.go:89] found id: ""
	I0127 13:33:49.384725  427154 logs.go:282] 0 containers: []
	W0127 13:33:49.384735  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:49.384744  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:49.384790  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:49.422962  427154 cri.go:89] found id: ""
	I0127 13:33:49.422989  427154 logs.go:282] 0 containers: []
	W0127 13:33:49.422996  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:49.423002  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:49.423064  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:49.457953  427154 cri.go:89] found id: ""
	I0127 13:33:49.457979  427154 logs.go:282] 0 containers: []
	W0127 13:33:49.457988  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:49.457993  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:49.458047  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:49.505161  427154 cri.go:89] found id: ""
	I0127 13:33:49.505192  427154 logs.go:282] 0 containers: []
	W0127 13:33:49.505204  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:49.505211  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:49.505271  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:49.545550  427154 cri.go:89] found id: ""
	I0127 13:33:49.545575  427154 logs.go:282] 0 containers: []
	W0127 13:33:49.545586  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:49.545595  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:49.545646  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:49.589757  427154 cri.go:89] found id: ""
	I0127 13:33:49.589778  427154 logs.go:282] 0 containers: []
	W0127 13:33:49.589787  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:49.589792  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:49.589830  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:49.627133  427154 cri.go:89] found id: ""
	I0127 13:33:49.627160  427154 logs.go:282] 0 containers: []
	W0127 13:33:49.627170  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:49.627181  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:49.627195  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:49.685584  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:49.685616  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:49.710754  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:49.710785  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:49.786690  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:49.786741  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:49.786758  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:49.869242  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:49.869277  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:52.413674  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:52.428526  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:52.428592  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:52.472186  427154 cri.go:89] found id: ""
	I0127 13:33:52.472209  427154 logs.go:282] 0 containers: []
	W0127 13:33:52.472218  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:52.472229  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:52.472273  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:52.509842  427154 cri.go:89] found id: ""
	I0127 13:33:52.509865  427154 logs.go:282] 0 containers: []
	W0127 13:33:52.509873  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:52.509878  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:52.509921  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:52.546573  427154 cri.go:89] found id: ""
	I0127 13:33:52.546593  427154 logs.go:282] 0 containers: []
	W0127 13:33:52.546603  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:52.546615  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:52.546668  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:52.581131  427154 cri.go:89] found id: ""
	I0127 13:33:52.581160  427154 logs.go:282] 0 containers: []
	W0127 13:33:52.581172  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:52.581179  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:52.581233  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:52.618690  427154 cri.go:89] found id: ""
	I0127 13:33:52.618717  427154 logs.go:282] 0 containers: []
	W0127 13:33:52.618727  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:52.618735  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:52.618790  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:52.663129  427154 cri.go:89] found id: ""
	I0127 13:33:52.663158  427154 logs.go:282] 0 containers: []
	W0127 13:33:52.663170  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:52.663178  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:52.663228  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:52.703730  427154 cri.go:89] found id: ""
	I0127 13:33:52.703754  427154 logs.go:282] 0 containers: []
	W0127 13:33:52.703764  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:52.703769  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:52.703810  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:52.747729  427154 cri.go:89] found id: ""
	I0127 13:33:52.747755  427154 logs.go:282] 0 containers: []
	W0127 13:33:52.747766  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:52.747779  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:52.747800  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:52.811451  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:52.811487  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:52.828345  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:52.828377  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:52.908972  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:52.908994  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:52.909007  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:53.001464  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:53.001501  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:55.545422  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:55.563482  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:55.563547  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:55.599154  427154 cri.go:89] found id: ""
	I0127 13:33:55.599189  427154 logs.go:282] 0 containers: []
	W0127 13:33:55.599203  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:55.599211  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:55.599284  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:55.636076  427154 cri.go:89] found id: ""
	I0127 13:33:55.636108  427154 logs.go:282] 0 containers: []
	W0127 13:33:55.636123  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:55.636129  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:55.636183  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:55.678140  427154 cri.go:89] found id: ""
	I0127 13:33:55.678173  427154 logs.go:282] 0 containers: []
	W0127 13:33:55.678185  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:55.678192  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:55.678259  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:55.716523  427154 cri.go:89] found id: ""
	I0127 13:33:55.716551  427154 logs.go:282] 0 containers: []
	W0127 13:33:55.716559  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:55.716564  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:55.716620  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:55.755463  427154 cri.go:89] found id: ""
	I0127 13:33:55.755492  427154 logs.go:282] 0 containers: []
	W0127 13:33:55.755503  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:55.755512  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:55.755580  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:55.796586  427154 cri.go:89] found id: ""
	I0127 13:33:55.796614  427154 logs.go:282] 0 containers: []
	W0127 13:33:55.796624  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:55.796632  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:55.796696  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:55.837032  427154 cri.go:89] found id: ""
	I0127 13:33:55.837060  427154 logs.go:282] 0 containers: []
	W0127 13:33:55.837068  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:55.837073  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:55.837144  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:55.875966  427154 cri.go:89] found id: ""
	I0127 13:33:55.875998  427154 logs.go:282] 0 containers: []
	W0127 13:33:55.876010  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:55.876023  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:55.876039  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:55.890314  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:55.890353  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:55.960943  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:55.960971  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:55.960985  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:56.050522  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:56.050576  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:33:56.105142  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:56.105182  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:58.673594  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:33:58.686829  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:33:58.686897  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:33:58.720952  427154 cri.go:89] found id: ""
	I0127 13:33:58.720983  427154 logs.go:282] 0 containers: []
	W0127 13:33:58.720996  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:33:58.721004  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:33:58.721079  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:33:58.760186  427154 cri.go:89] found id: ""
	I0127 13:33:58.760220  427154 logs.go:282] 0 containers: []
	W0127 13:33:58.760234  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:33:58.760242  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:33:58.760304  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:33:58.794924  427154 cri.go:89] found id: ""
	I0127 13:33:58.794952  427154 logs.go:282] 0 containers: []
	W0127 13:33:58.794961  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:33:58.794966  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:33:58.795060  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:33:58.829824  427154 cri.go:89] found id: ""
	I0127 13:33:58.829856  427154 logs.go:282] 0 containers: []
	W0127 13:33:58.829868  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:33:58.829876  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:33:58.829940  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:33:58.863372  427154 cri.go:89] found id: ""
	I0127 13:33:58.863402  427154 logs.go:282] 0 containers: []
	W0127 13:33:58.863414  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:33:58.863422  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:33:58.863482  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:33:58.895845  427154 cri.go:89] found id: ""
	I0127 13:33:58.895871  427154 logs.go:282] 0 containers: []
	W0127 13:33:58.895879  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:33:58.895885  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:33:58.895945  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:33:58.928793  427154 cri.go:89] found id: ""
	I0127 13:33:58.928825  427154 logs.go:282] 0 containers: []
	W0127 13:33:58.928837  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:33:58.928845  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:33:58.928910  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:33:58.966155  427154 cri.go:89] found id: ""
	I0127 13:33:58.966188  427154 logs.go:282] 0 containers: []
	W0127 13:33:58.966198  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:33:58.966219  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:33:58.966233  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:33:59.023434  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:33:59.023470  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:33:59.038597  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:33:59.038634  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:33:59.109269  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:33:59.109294  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:33:59.109311  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:33:59.198819  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:33:59.198858  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:01.748081  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:01.761855  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:01.761921  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:01.798150  427154 cri.go:89] found id: ""
	I0127 13:34:01.798179  427154 logs.go:282] 0 containers: []
	W0127 13:34:01.798192  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:01.798199  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:01.798268  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:01.836121  427154 cri.go:89] found id: ""
	I0127 13:34:01.836153  427154 logs.go:282] 0 containers: []
	W0127 13:34:01.836162  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:01.836167  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:01.836222  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:01.883044  427154 cri.go:89] found id: ""
	I0127 13:34:01.883087  427154 logs.go:282] 0 containers: []
	W0127 13:34:01.883101  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:01.883109  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:01.883181  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:01.927307  427154 cri.go:89] found id: ""
	I0127 13:34:01.927347  427154 logs.go:282] 0 containers: []
	W0127 13:34:01.927359  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:01.927367  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:01.927437  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:01.967578  427154 cri.go:89] found id: ""
	I0127 13:34:01.967616  427154 logs.go:282] 0 containers: []
	W0127 13:34:01.967628  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:01.967637  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:01.967715  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:02.014480  427154 cri.go:89] found id: ""
	I0127 13:34:02.014530  427154 logs.go:282] 0 containers: []
	W0127 13:34:02.014560  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:02.014569  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:02.014641  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:02.051521  427154 cri.go:89] found id: ""
	I0127 13:34:02.051554  427154 logs.go:282] 0 containers: []
	W0127 13:34:02.051564  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:02.051571  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:02.051633  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:02.085631  427154 cri.go:89] found id: ""
	I0127 13:34:02.085663  427154 logs.go:282] 0 containers: []
	W0127 13:34:02.085683  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:02.085696  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:02.085713  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:02.139357  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:02.139390  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:02.152758  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:02.152789  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:02.217523  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:02.217546  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:02.217566  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:02.303837  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:02.303876  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:04.847774  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:04.866163  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:04.866228  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:04.903194  427154 cri.go:89] found id: ""
	I0127 13:34:04.903230  427154 logs.go:282] 0 containers: []
	W0127 13:34:04.903242  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:04.903249  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:04.903310  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:04.941774  427154 cri.go:89] found id: ""
	I0127 13:34:04.941812  427154 logs.go:282] 0 containers: []
	W0127 13:34:04.941823  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:04.941831  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:04.941900  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:04.982445  427154 cri.go:89] found id: ""
	I0127 13:34:04.982475  427154 logs.go:282] 0 containers: []
	W0127 13:34:04.982486  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:04.982495  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:04.982593  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:05.025319  427154 cri.go:89] found id: ""
	I0127 13:34:05.025348  427154 logs.go:282] 0 containers: []
	W0127 13:34:05.025360  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:05.025368  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:05.025427  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:05.064543  427154 cri.go:89] found id: ""
	I0127 13:34:05.064569  427154 logs.go:282] 0 containers: []
	W0127 13:34:05.064577  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:05.064582  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:05.064634  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:05.105303  427154 cri.go:89] found id: ""
	I0127 13:34:05.105334  427154 logs.go:282] 0 containers: []
	W0127 13:34:05.105349  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:05.105356  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:05.105407  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:05.148523  427154 cri.go:89] found id: ""
	I0127 13:34:05.148558  427154 logs.go:282] 0 containers: []
	W0127 13:34:05.148570  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:05.148578  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:05.148658  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:05.190809  427154 cri.go:89] found id: ""
	I0127 13:34:05.190844  427154 logs.go:282] 0 containers: []
	W0127 13:34:05.190855  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:05.190870  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:05.190885  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:05.259698  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:05.259744  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:05.274965  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:05.275016  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:05.355248  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:05.355280  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:05.355301  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:05.441128  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:05.441172  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:07.991447  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:08.004334  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:08.004407  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:08.043396  427154 cri.go:89] found id: ""
	I0127 13:34:08.043425  427154 logs.go:282] 0 containers: []
	W0127 13:34:08.043436  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:08.043445  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:08.043529  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:08.077350  427154 cri.go:89] found id: ""
	I0127 13:34:08.077382  427154 logs.go:282] 0 containers: []
	W0127 13:34:08.077400  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:08.077409  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:08.077473  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:08.111568  427154 cri.go:89] found id: ""
	I0127 13:34:08.111595  427154 logs.go:282] 0 containers: []
	W0127 13:34:08.111604  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:08.111610  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:08.111657  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:08.150163  427154 cri.go:89] found id: ""
	I0127 13:34:08.150190  427154 logs.go:282] 0 containers: []
	W0127 13:34:08.150198  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:08.150205  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:08.150259  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:08.185671  427154 cri.go:89] found id: ""
	I0127 13:34:08.185703  427154 logs.go:282] 0 containers: []
	W0127 13:34:08.185716  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:08.185725  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:08.185790  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:08.221978  427154 cri.go:89] found id: ""
	I0127 13:34:08.222005  427154 logs.go:282] 0 containers: []
	W0127 13:34:08.222014  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:08.222020  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:08.222071  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:08.254608  427154 cri.go:89] found id: ""
	I0127 13:34:08.254635  427154 logs.go:282] 0 containers: []
	W0127 13:34:08.254643  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:08.254649  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:08.254701  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:08.286956  427154 cri.go:89] found id: ""
	I0127 13:34:08.286988  427154 logs.go:282] 0 containers: []
	W0127 13:34:08.287000  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:08.287011  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:08.287024  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:08.338807  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:08.338843  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:08.352946  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:08.352977  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:08.425860  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:08.425897  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:08.425913  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:08.533375  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:08.533411  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:11.086501  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:11.100151  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:11.100215  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:11.136626  427154 cri.go:89] found id: ""
	I0127 13:34:11.136655  427154 logs.go:282] 0 containers: []
	W0127 13:34:11.136663  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:11.136669  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:11.136719  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:11.171761  427154 cri.go:89] found id: ""
	I0127 13:34:11.171790  427154 logs.go:282] 0 containers: []
	W0127 13:34:11.171799  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:11.171804  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:11.171856  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:11.206879  427154 cri.go:89] found id: ""
	I0127 13:34:11.206915  427154 logs.go:282] 0 containers: []
	W0127 13:34:11.206930  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:11.206938  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:11.207002  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:11.241441  427154 cri.go:89] found id: ""
	I0127 13:34:11.241471  427154 logs.go:282] 0 containers: []
	W0127 13:34:11.241507  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:11.241518  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:11.241584  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:11.274243  427154 cri.go:89] found id: ""
	I0127 13:34:11.274278  427154 logs.go:282] 0 containers: []
	W0127 13:34:11.274293  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:11.274301  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:11.274373  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:11.308126  427154 cri.go:89] found id: ""
	I0127 13:34:11.308161  427154 logs.go:282] 0 containers: []
	W0127 13:34:11.308173  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:11.308180  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:11.308246  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:11.349084  427154 cri.go:89] found id: ""
	I0127 13:34:11.349118  427154 logs.go:282] 0 containers: []
	W0127 13:34:11.349130  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:11.349137  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:11.349193  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:11.387432  427154 cri.go:89] found id: ""
	I0127 13:34:11.387466  427154 logs.go:282] 0 containers: []
	W0127 13:34:11.387479  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:11.387493  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:11.387517  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:11.437506  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:11.437537  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:11.451040  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:11.451067  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:11.531362  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:11.531383  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:11.531395  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:11.611808  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:11.611855  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:14.154642  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:14.168115  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:14.168206  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:14.202473  427154 cri.go:89] found id: ""
	I0127 13:34:14.202499  427154 logs.go:282] 0 containers: []
	W0127 13:34:14.202507  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:14.202513  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:14.202585  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:14.235619  427154 cri.go:89] found id: ""
	I0127 13:34:14.235652  427154 logs.go:282] 0 containers: []
	W0127 13:34:14.235661  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:14.235667  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:14.235727  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:14.268746  427154 cri.go:89] found id: ""
	I0127 13:34:14.268777  427154 logs.go:282] 0 containers: []
	W0127 13:34:14.268786  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:14.268793  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:14.268850  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:14.299068  427154 cri.go:89] found id: ""
	I0127 13:34:14.299103  427154 logs.go:282] 0 containers: []
	W0127 13:34:14.299116  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:14.299124  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:14.299189  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:14.336531  427154 cri.go:89] found id: ""
	I0127 13:34:14.336563  427154 logs.go:282] 0 containers: []
	W0127 13:34:14.336573  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:14.336580  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:14.336643  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:14.372100  427154 cri.go:89] found id: ""
	I0127 13:34:14.372136  427154 logs.go:282] 0 containers: []
	W0127 13:34:14.372148  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:14.372156  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:14.372215  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:14.405383  427154 cri.go:89] found id: ""
	I0127 13:34:14.405413  427154 logs.go:282] 0 containers: []
	W0127 13:34:14.405424  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:14.405432  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:14.405501  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:14.439462  427154 cri.go:89] found id: ""
	I0127 13:34:14.439493  427154 logs.go:282] 0 containers: []
	W0127 13:34:14.439502  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:14.439513  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:14.439527  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:14.491123  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:14.491156  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:14.504582  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:14.504607  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:14.579502  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:14.579526  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:14.579544  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:14.658729  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:14.658768  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:17.198084  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:17.210892  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:17.210969  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:17.244268  427154 cri.go:89] found id: ""
	I0127 13:34:17.244304  427154 logs.go:282] 0 containers: []
	W0127 13:34:17.244316  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:17.244331  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:17.244383  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:17.276162  427154 cri.go:89] found id: ""
	I0127 13:34:17.276194  427154 logs.go:282] 0 containers: []
	W0127 13:34:17.276205  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:17.276214  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:17.276279  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:17.309236  427154 cri.go:89] found id: ""
	I0127 13:34:17.309268  427154 logs.go:282] 0 containers: []
	W0127 13:34:17.309279  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:17.309287  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:17.309347  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:17.343799  427154 cri.go:89] found id: ""
	I0127 13:34:17.343826  427154 logs.go:282] 0 containers: []
	W0127 13:34:17.343835  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:17.343840  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:17.343894  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:17.380032  427154 cri.go:89] found id: ""
	I0127 13:34:17.380060  427154 logs.go:282] 0 containers: []
	W0127 13:34:17.380069  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:17.380075  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:17.380126  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:17.413171  427154 cri.go:89] found id: ""
	I0127 13:34:17.413199  427154 logs.go:282] 0 containers: []
	W0127 13:34:17.413207  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:17.413212  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:17.413260  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:17.452030  427154 cri.go:89] found id: ""
	I0127 13:34:17.452061  427154 logs.go:282] 0 containers: []
	W0127 13:34:17.452072  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:17.452079  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:17.452131  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:17.485170  427154 cri.go:89] found id: ""
	I0127 13:34:17.485200  427154 logs.go:282] 0 containers: []
	W0127 13:34:17.485211  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:17.485225  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:17.485240  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:17.565247  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:17.565284  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:17.602742  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:17.602777  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:17.653884  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:17.653915  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:17.667442  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:17.667474  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:17.736022  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:20.237220  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:20.253875  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:20.253956  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:20.293887  427154 cri.go:89] found id: ""
	I0127 13:34:20.293911  427154 logs.go:282] 0 containers: []
	W0127 13:34:20.293920  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:20.293926  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:20.293977  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:20.339017  427154 cri.go:89] found id: ""
	I0127 13:34:20.339042  427154 logs.go:282] 0 containers: []
	W0127 13:34:20.339050  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:20.339055  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:20.339109  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:20.377707  427154 cri.go:89] found id: ""
	I0127 13:34:20.377775  427154 logs.go:282] 0 containers: []
	W0127 13:34:20.377792  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:20.377801  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:20.377856  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:20.417359  427154 cri.go:89] found id: ""
	I0127 13:34:20.417390  427154 logs.go:282] 0 containers: []
	W0127 13:34:20.417400  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:20.417408  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:20.417466  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:20.453637  427154 cri.go:89] found id: ""
	I0127 13:34:20.453669  427154 logs.go:282] 0 containers: []
	W0127 13:34:20.453678  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:20.453683  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:20.453738  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:20.491388  427154 cri.go:89] found id: ""
	I0127 13:34:20.491422  427154 logs.go:282] 0 containers: []
	W0127 13:34:20.491433  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:20.491441  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:20.491493  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:20.526420  427154 cri.go:89] found id: ""
	I0127 13:34:20.526450  427154 logs.go:282] 0 containers: []
	W0127 13:34:20.526461  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:20.526469  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:20.526527  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:20.569346  427154 cri.go:89] found id: ""
	I0127 13:34:20.569378  427154 logs.go:282] 0 containers: []
	W0127 13:34:20.569388  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:20.569401  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:20.569419  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:20.621898  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:20.621931  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:20.638485  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:20.638515  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:20.723119  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:20.723145  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:20.723163  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:20.816902  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:20.816942  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:23.374257  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:23.388217  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:23.388299  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:23.427775  427154 cri.go:89] found id: ""
	I0127 13:34:23.427815  427154 logs.go:282] 0 containers: []
	W0127 13:34:23.427852  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:23.427862  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:23.427935  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:23.466750  427154 cri.go:89] found id: ""
	I0127 13:34:23.466776  427154 logs.go:282] 0 containers: []
	W0127 13:34:23.466785  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:23.466791  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:23.466844  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:23.500704  427154 cri.go:89] found id: ""
	I0127 13:34:23.500741  427154 logs.go:282] 0 containers: []
	W0127 13:34:23.500753  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:23.500761  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:23.500841  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:23.543322  427154 cri.go:89] found id: ""
	I0127 13:34:23.543348  427154 logs.go:282] 0 containers: []
	W0127 13:34:23.543356  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:23.543362  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:23.543451  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:23.591763  427154 cri.go:89] found id: ""
	I0127 13:34:23.591797  427154 logs.go:282] 0 containers: []
	W0127 13:34:23.591810  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:23.591818  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:23.591906  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:23.632151  427154 cri.go:89] found id: ""
	I0127 13:34:23.632193  427154 logs.go:282] 0 containers: []
	W0127 13:34:23.632205  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:23.632214  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:23.632285  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:23.668919  427154 cri.go:89] found id: ""
	I0127 13:34:23.668950  427154 logs.go:282] 0 containers: []
	W0127 13:34:23.668961  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:23.668970  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:23.669033  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:23.710997  427154 cri.go:89] found id: ""
	I0127 13:34:23.711036  427154 logs.go:282] 0 containers: []
	W0127 13:34:23.711048  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:23.711062  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:23.711080  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:23.724974  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:23.725005  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:23.815178  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:23.815209  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:23.815226  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:23.909433  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:23.909487  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:23.959899  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:23.959932  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:26.537803  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:26.550567  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:26.550646  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:26.590444  427154 cri.go:89] found id: ""
	I0127 13:34:26.590481  427154 logs.go:282] 0 containers: []
	W0127 13:34:26.590493  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:26.590501  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:26.590577  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:26.629496  427154 cri.go:89] found id: ""
	I0127 13:34:26.629528  427154 logs.go:282] 0 containers: []
	W0127 13:34:26.629539  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:26.629547  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:26.629625  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:26.666708  427154 cri.go:89] found id: ""
	I0127 13:34:26.666745  427154 logs.go:282] 0 containers: []
	W0127 13:34:26.666756  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:26.666765  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:26.666829  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:26.706147  427154 cri.go:89] found id: ""
	I0127 13:34:26.706181  427154 logs.go:282] 0 containers: []
	W0127 13:34:26.706193  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:26.706201  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:26.706273  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:26.740428  427154 cri.go:89] found id: ""
	I0127 13:34:26.740458  427154 logs.go:282] 0 containers: []
	W0127 13:34:26.740467  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:26.740473  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:26.740522  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:26.773660  427154 cri.go:89] found id: ""
	I0127 13:34:26.773695  427154 logs.go:282] 0 containers: []
	W0127 13:34:26.773709  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:26.773717  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:26.773786  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:26.812259  427154 cri.go:89] found id: ""
	I0127 13:34:26.812288  427154 logs.go:282] 0 containers: []
	W0127 13:34:26.812297  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:26.812303  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:26.812364  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:26.847865  427154 cri.go:89] found id: ""
	I0127 13:34:26.847896  427154 logs.go:282] 0 containers: []
	W0127 13:34:26.847906  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:26.847916  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:26.847935  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:26.903219  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:26.903251  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:26.916987  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:26.917012  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:26.987116  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:26.987142  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:26.987159  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:27.068893  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:27.068942  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:29.614680  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:29.632836  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:29.632915  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:29.673406  427154 cri.go:89] found id: ""
	I0127 13:34:29.673446  427154 logs.go:282] 0 containers: []
	W0127 13:34:29.673458  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:29.673467  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:29.673541  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:29.715873  427154 cri.go:89] found id: ""
	I0127 13:34:29.715909  427154 logs.go:282] 0 containers: []
	W0127 13:34:29.715922  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:29.715929  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:29.715998  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:29.758736  427154 cri.go:89] found id: ""
	I0127 13:34:29.758769  427154 logs.go:282] 0 containers: []
	W0127 13:34:29.758782  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:29.758789  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:29.758851  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:29.803706  427154 cri.go:89] found id: ""
	I0127 13:34:29.803744  427154 logs.go:282] 0 containers: []
	W0127 13:34:29.803756  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:29.803764  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:29.803831  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:29.842046  427154 cri.go:89] found id: ""
	I0127 13:34:29.842080  427154 logs.go:282] 0 containers: []
	W0127 13:34:29.842092  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:29.842100  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:29.842163  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:29.880462  427154 cri.go:89] found id: ""
	I0127 13:34:29.880488  427154 logs.go:282] 0 containers: []
	W0127 13:34:29.880496  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:29.880502  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:29.880558  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:29.914707  427154 cri.go:89] found id: ""
	I0127 13:34:29.914737  427154 logs.go:282] 0 containers: []
	W0127 13:34:29.914746  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:29.914752  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:29.914818  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:29.951572  427154 cri.go:89] found id: ""
	I0127 13:34:29.951604  427154 logs.go:282] 0 containers: []
	W0127 13:34:29.951616  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:29.951630  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:29.951645  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:30.026917  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:30.026942  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:30.026958  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:30.105800  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:30.105837  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:30.146405  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:30.146438  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:30.209974  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:30.210017  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:32.746649  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:32.760657  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:32.760729  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:32.796152  427154 cri.go:89] found id: ""
	I0127 13:34:32.796184  427154 logs.go:282] 0 containers: []
	W0127 13:34:32.796196  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:32.796204  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:32.796272  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:32.831763  427154 cri.go:89] found id: ""
	I0127 13:34:32.831789  427154 logs.go:282] 0 containers: []
	W0127 13:34:32.831797  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:32.831802  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:32.831863  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:32.867657  427154 cri.go:89] found id: ""
	I0127 13:34:32.867685  427154 logs.go:282] 0 containers: []
	W0127 13:34:32.867693  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:32.867698  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:32.867758  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:32.908001  427154 cri.go:89] found id: ""
	I0127 13:34:32.908035  427154 logs.go:282] 0 containers: []
	W0127 13:34:32.908049  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:32.908058  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:32.908125  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:32.953492  427154 cri.go:89] found id: ""
	I0127 13:34:32.953529  427154 logs.go:282] 0 containers: []
	W0127 13:34:32.953542  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:32.953550  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:32.953609  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:32.992210  427154 cri.go:89] found id: ""
	I0127 13:34:32.992242  427154 logs.go:282] 0 containers: []
	W0127 13:34:32.992254  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:32.992261  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:32.992347  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:33.033354  427154 cri.go:89] found id: ""
	I0127 13:34:33.033392  427154 logs.go:282] 0 containers: []
	W0127 13:34:33.033405  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:33.033412  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:33.033484  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:33.070125  427154 cri.go:89] found id: ""
	I0127 13:34:33.070163  427154 logs.go:282] 0 containers: []
	W0127 13:34:33.070176  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:33.070188  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:33.070204  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:33.145106  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:33.145145  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:33.162161  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:33.162195  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:33.235831  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:33.235861  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:33.235881  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:33.322741  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:33.322788  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:35.871339  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:35.885813  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:35.885868  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:35.926502  427154 cri.go:89] found id: ""
	I0127 13:34:35.926549  427154 logs.go:282] 0 containers: []
	W0127 13:34:35.926581  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:35.926592  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:35.926674  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:35.961958  427154 cri.go:89] found id: ""
	I0127 13:34:35.961991  427154 logs.go:282] 0 containers: []
	W0127 13:34:35.962003  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:35.962010  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:35.962071  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:35.996937  427154 cri.go:89] found id: ""
	I0127 13:34:35.996966  427154 logs.go:282] 0 containers: []
	W0127 13:34:35.996975  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:35.996981  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:35.997043  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:36.030799  427154 cri.go:89] found id: ""
	I0127 13:34:36.030825  427154 logs.go:282] 0 containers: []
	W0127 13:34:36.030834  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:36.030840  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:36.030893  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:36.071992  427154 cri.go:89] found id: ""
	I0127 13:34:36.072017  427154 logs.go:282] 0 containers: []
	W0127 13:34:36.072027  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:36.072035  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:36.072092  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:36.109470  427154 cri.go:89] found id: ""
	I0127 13:34:36.109502  427154 logs.go:282] 0 containers: []
	W0127 13:34:36.109511  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:36.109518  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:36.109580  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:36.152392  427154 cri.go:89] found id: ""
	I0127 13:34:36.152424  427154 logs.go:282] 0 containers: []
	W0127 13:34:36.152435  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:36.152446  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:36.152499  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:36.187377  427154 cri.go:89] found id: ""
	I0127 13:34:36.187407  427154 logs.go:282] 0 containers: []
	W0127 13:34:36.187414  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:36.187424  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:36.187436  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:36.253107  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:36.253142  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:36.267517  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:36.267547  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:36.339260  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:36.339288  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:36.339303  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:36.414683  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:36.414720  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:38.957419  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:38.973215  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:38.973292  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:39.016628  427154 cri.go:89] found id: ""
	I0127 13:34:39.016653  427154 logs.go:282] 0 containers: []
	W0127 13:34:39.016661  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:39.016668  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:39.016734  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:39.054624  427154 cri.go:89] found id: ""
	I0127 13:34:39.054651  427154 logs.go:282] 0 containers: []
	W0127 13:34:39.054664  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:39.054671  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:39.054723  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:39.092210  427154 cri.go:89] found id: ""
	I0127 13:34:39.092241  427154 logs.go:282] 0 containers: []
	W0127 13:34:39.092253  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:39.092261  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:39.092323  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:39.135686  427154 cri.go:89] found id: ""
	I0127 13:34:39.135720  427154 logs.go:282] 0 containers: []
	W0127 13:34:39.135733  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:39.135741  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:39.135808  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:39.176526  427154 cri.go:89] found id: ""
	I0127 13:34:39.176557  427154 logs.go:282] 0 containers: []
	W0127 13:34:39.176568  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:39.176575  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:39.176643  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:39.220668  427154 cri.go:89] found id: ""
	I0127 13:34:39.220699  427154 logs.go:282] 0 containers: []
	W0127 13:34:39.220710  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:39.220719  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:39.220788  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:39.261041  427154 cri.go:89] found id: ""
	I0127 13:34:39.261080  427154 logs.go:282] 0 containers: []
	W0127 13:34:39.261092  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:39.261099  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:39.261162  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:39.299046  427154 cri.go:89] found id: ""
	I0127 13:34:39.299079  427154 logs.go:282] 0 containers: []
	W0127 13:34:39.299092  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:39.299107  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:39.299123  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:39.376873  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:39.376910  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:39.419456  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:39.419485  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:39.479421  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:39.479461  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:39.492813  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:39.492844  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:39.570653  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:42.071694  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:42.089880  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:42.089960  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:42.128719  427154 cri.go:89] found id: ""
	I0127 13:34:42.128748  427154 logs.go:282] 0 containers: []
	W0127 13:34:42.128760  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:42.128768  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:42.128833  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:42.169409  427154 cri.go:89] found id: ""
	I0127 13:34:42.169438  427154 logs.go:282] 0 containers: []
	W0127 13:34:42.169454  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:42.169459  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:42.169511  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:42.207100  427154 cri.go:89] found id: ""
	I0127 13:34:42.207136  427154 logs.go:282] 0 containers: []
	W0127 13:34:42.207148  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:42.207155  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:42.207225  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:42.244593  427154 cri.go:89] found id: ""
	I0127 13:34:42.244630  427154 logs.go:282] 0 containers: []
	W0127 13:34:42.244642  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:42.244650  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:42.244704  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:42.278310  427154 cri.go:89] found id: ""
	I0127 13:34:42.278351  427154 logs.go:282] 0 containers: []
	W0127 13:34:42.278361  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:42.278367  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:42.278420  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:42.310742  427154 cri.go:89] found id: ""
	I0127 13:34:42.310790  427154 logs.go:282] 0 containers: []
	W0127 13:34:42.310801  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:42.310807  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:42.310875  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:42.357629  427154 cri.go:89] found id: ""
	I0127 13:34:42.357668  427154 logs.go:282] 0 containers: []
	W0127 13:34:42.357680  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:42.357687  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:42.357756  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:42.400321  427154 cri.go:89] found id: ""
	I0127 13:34:42.400359  427154 logs.go:282] 0 containers: []
	W0127 13:34:42.400371  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:42.400384  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:42.400401  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:42.457882  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:42.457921  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:42.473246  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:42.473280  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:42.551885  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:42.551917  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:42.551933  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:42.653604  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:42.653658  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:45.217379  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:45.232200  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:45.232286  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:45.267004  427154 cri.go:89] found id: ""
	I0127 13:34:45.267042  427154 logs.go:282] 0 containers: []
	W0127 13:34:45.267055  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:45.267064  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:45.267143  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:45.300373  427154 cri.go:89] found id: ""
	I0127 13:34:45.300407  427154 logs.go:282] 0 containers: []
	W0127 13:34:45.300419  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:45.300428  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:45.300495  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:45.338554  427154 cri.go:89] found id: ""
	I0127 13:34:45.338583  427154 logs.go:282] 0 containers: []
	W0127 13:34:45.338595  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:45.338602  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:45.338666  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:45.380664  427154 cri.go:89] found id: ""
	I0127 13:34:45.380690  427154 logs.go:282] 0 containers: []
	W0127 13:34:45.380697  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:45.380703  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:45.380756  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:45.415258  427154 cri.go:89] found id: ""
	I0127 13:34:45.415286  427154 logs.go:282] 0 containers: []
	W0127 13:34:45.415294  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:45.415300  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:45.415364  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:45.449039  427154 cri.go:89] found id: ""
	I0127 13:34:45.449067  427154 logs.go:282] 0 containers: []
	W0127 13:34:45.449075  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:45.449090  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:45.449142  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:45.483188  427154 cri.go:89] found id: ""
	I0127 13:34:45.483224  427154 logs.go:282] 0 containers: []
	W0127 13:34:45.483237  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:45.483244  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:45.483317  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:45.515546  427154 cri.go:89] found id: ""
	I0127 13:34:45.515581  427154 logs.go:282] 0 containers: []
	W0127 13:34:45.515590  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:45.515602  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:45.515614  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:45.568239  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:45.568271  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:45.581849  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:45.581881  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:45.660910  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:45.660944  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:45.660960  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:45.758459  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:45.758502  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:48.302747  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:48.321834  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:48.321899  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:48.370678  427154 cri.go:89] found id: ""
	I0127 13:34:48.370716  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.370732  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:48.370741  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:48.370813  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:48.430514  427154 cri.go:89] found id: ""
	I0127 13:34:48.430655  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.430683  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:48.430702  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:48.430826  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:48.477908  427154 cri.go:89] found id: ""
	I0127 13:34:48.477941  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.477954  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:48.477962  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:48.478036  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:48.532193  427154 cri.go:89] found id: ""
	I0127 13:34:48.532230  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.532242  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:48.532250  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:48.532316  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:48.580627  427154 cri.go:89] found id: ""
	I0127 13:34:48.580658  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.580667  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:48.580673  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:48.580744  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:48.620393  427154 cri.go:89] found id: ""
	I0127 13:34:48.620428  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.620441  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:48.620449  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:48.620518  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:48.662032  427154 cri.go:89] found id: ""
	I0127 13:34:48.662071  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.662079  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:48.662097  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:48.662164  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:48.699662  427154 cri.go:89] found id: ""
	I0127 13:34:48.699697  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.699709  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:48.699723  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:48.699745  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:48.752100  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:48.752134  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:48.768121  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:48.768167  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:48.838690  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:48.838718  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:48.838734  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:48.928433  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:48.928471  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:51.475609  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:51.489500  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:51.489579  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:51.536219  427154 cri.go:89] found id: ""
	I0127 13:34:51.536250  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.536262  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:51.536270  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:51.536334  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:51.577494  427154 cri.go:89] found id: ""
	I0127 13:34:51.577522  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.577536  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:51.577543  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:51.577606  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:51.614430  427154 cri.go:89] found id: ""
	I0127 13:34:51.614463  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.614476  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:51.614484  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:51.614602  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:51.666530  427154 cri.go:89] found id: ""
	I0127 13:34:51.666582  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.666591  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:51.666597  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:51.666653  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:51.705538  427154 cri.go:89] found id: ""
	I0127 13:34:51.705567  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.705579  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:51.705587  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:51.705645  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:51.743604  427154 cri.go:89] found id: ""
	I0127 13:34:51.743638  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.743650  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:51.743658  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:51.743721  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:51.778029  427154 cri.go:89] found id: ""
	I0127 13:34:51.778058  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.778070  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:51.778078  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:51.778148  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:51.819260  427154 cri.go:89] found id: ""
	I0127 13:34:51.819294  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.819307  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:51.819321  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:51.819338  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:51.887511  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:51.887552  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:51.904227  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:51.904261  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:51.980655  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:51.980684  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:51.980699  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:52.085922  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:52.085973  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:54.642029  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:54.655922  427154 kubeadm.go:597] duration metric: took 4m4.240008337s to restartPrimaryControlPlane
	W0127 13:34:54.656192  427154 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:34:54.656244  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:34:59.517968  427154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.861694115s)
	I0127 13:34:59.518062  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:34:59.536180  427154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:34:59.547986  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:34:59.561566  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:34:59.561591  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:34:59.561645  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:34:59.574802  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:34:59.574872  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:34:59.588185  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:34:59.598292  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:34:59.598356  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:34:59.608921  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.621764  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:34:59.621825  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.635526  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:34:59.646582  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:34:59.646644  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:34:59.657975  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:34:59.745239  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:34:59.745337  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:59.946676  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:59.946890  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:59.947050  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:35:00.183580  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:35:00.184950  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:35:00.185049  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:35:00.185140  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:35:00.185334  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:35:00.185435  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:35:00.186094  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:35:00.186301  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:35:00.187022  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:35:00.187455  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:35:00.187928  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:35:00.188334  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:35:00.188531  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:35:00.188608  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:35:00.344156  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:35:00.836083  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:35:00.964664  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:35:01.072929  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:35:01.092946  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:35:01.097538  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:35:01.097961  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:35:01.292953  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:35:01.294375  427154 out.go:235]   - Booting up control plane ...
	I0127 13:35:01.294569  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:35:01.306014  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:35:01.309847  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:35:01.310062  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:35:01.312436  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:35:41.313958  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:35:41.315406  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:41.315596  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:46.316260  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:46.316520  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:56.316974  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:56.317208  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:16.318338  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:16.318524  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320677  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:56.320945  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320963  427154 kubeadm.go:310] 
	I0127 13:36:56.321020  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:36:56.321085  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:36:56.321099  427154 kubeadm.go:310] 
	I0127 13:36:56.321165  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:36:56.321228  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:36:56.321357  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:36:56.321378  427154 kubeadm.go:310] 
	I0127 13:36:56.321499  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:36:56.321545  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:36:56.321574  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:36:56.321580  427154 kubeadm.go:310] 
	I0127 13:36:56.321720  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:36:56.321827  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:36:56.321840  427154 kubeadm.go:310] 
	I0127 13:36:56.321935  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:36:56.322018  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:36:56.322099  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:36:56.322162  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:36:56.322169  427154 kubeadm.go:310] 
	I0127 13:36:56.323303  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:36:56.323399  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:36:56.323478  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 13:36:56.323617  427154 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 13:36:56.323664  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:36:56.804696  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:36:56.819996  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:36:56.830103  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:36:56.830120  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:36:56.830161  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:36:56.839297  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:36:56.839351  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:36:56.848603  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:36:56.857433  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:36:56.857500  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:36:56.867735  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.876669  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:36:56.876721  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.885857  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:36:56.894734  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:36:56.894788  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:36:56.904112  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:36:56.975515  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:36:56.975724  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:36:57.110596  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:36:57.110748  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:36:57.110890  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:36:57.287182  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:36:57.289124  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:36:57.289247  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:36:57.289310  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:36:57.289405  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:36:57.289504  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:36:57.289595  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:36:57.289665  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:36:57.289780  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:36:57.290345  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:36:57.291337  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:36:57.292274  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:36:57.292554  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:36:57.292622  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:36:57.586245  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:36:57.746278  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:36:57.846816  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:36:57.985775  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:36:58.007369  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:36:58.008417  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:36:58.008485  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:36:58.134182  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:36:58.136066  427154 out.go:235]   - Booting up control plane ...
	I0127 13:36:58.136194  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:36:58.148785  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:36:58.148921  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:36:58.149274  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:36:58.153395  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:37:38.155987  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:37:38.156613  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:38.156831  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:43.157356  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:43.157567  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:53.158341  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:53.158675  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:13.158624  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:13.158876  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157583  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:53.157824  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157839  427154 kubeadm.go:310] 
	I0127 13:38:53.157896  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:38:53.157954  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:38:53.157966  427154 kubeadm.go:310] 
	I0127 13:38:53.158014  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:38:53.158064  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:38:53.158222  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:38:53.158234  427154 kubeadm.go:310] 
	I0127 13:38:53.158404  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:38:53.158453  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:38:53.158483  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:38:53.158491  427154 kubeadm.go:310] 
	I0127 13:38:53.158624  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:38:53.158726  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:38:53.158741  427154 kubeadm.go:310] 
	I0127 13:38:53.158894  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:38:53.159040  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:38:53.159165  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:38:53.159264  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:38:53.159275  427154 kubeadm.go:310] 
	I0127 13:38:53.159902  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:38:53.160042  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:38:53.160128  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:38:53.160213  427154 kubeadm.go:394] duration metric: took 8m2.798471593s to StartCluster
	I0127 13:38:53.160286  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:38:53.160377  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:38:53.205471  427154 cri.go:89] found id: ""
	I0127 13:38:53.205496  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.205504  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:38:53.205510  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:38:53.205577  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:38:53.240500  427154 cri.go:89] found id: ""
	I0127 13:38:53.240532  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.240543  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:38:53.240564  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:38:53.240625  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:38:53.282232  427154 cri.go:89] found id: ""
	I0127 13:38:53.282267  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.282279  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:38:53.282287  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:38:53.282354  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:38:53.315589  427154 cri.go:89] found id: ""
	I0127 13:38:53.315643  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.315659  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:38:53.315666  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:38:53.315735  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:38:53.349806  427154 cri.go:89] found id: ""
	I0127 13:38:53.349836  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.349844  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:38:53.349850  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:38:53.349906  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:38:53.382052  427154 cri.go:89] found id: ""
	I0127 13:38:53.382084  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.382095  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:38:53.382103  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:38:53.382176  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:38:53.416057  427154 cri.go:89] found id: ""
	I0127 13:38:53.416091  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.416103  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:38:53.416120  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:38:53.416185  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:38:53.449983  427154 cri.go:89] found id: ""
	I0127 13:38:53.450017  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.450029  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:38:53.450046  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:38:53.450064  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:38:53.498208  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:38:53.498242  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:38:53.552441  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:38:53.552472  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:38:53.567811  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:38:53.567841  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:38:53.646625  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:38:53.646651  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:38:53.646667  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 13:38:53.748675  427154 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 13:38:53.748747  427154 out.go:270] * 
	* 
	W0127 13:38:53.748849  427154 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.748865  427154 out.go:270] * 
	* 
	W0127 13:38:53.749670  427154 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:38:53.753264  427154 out.go:201] 
	W0127 13:38:53.754315  427154 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.754372  427154 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 13:38:53.754397  427154 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 13:38:53.755624  427154 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-838260 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
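The kubeadm output captured above shows the kubelet on old-k8s-version-838260 never answering http://localhost:10248/healthz, so kubeadm gave up waiting for the control plane and minikube exited with K8S_KUBELET_NOT_RUNNING. A hedged sketch of how this could be chased down by hand, reusing only the commands kubeadm and minikube themselves suggest in the log above (the `minikube ssh` wrapping and the retry flag set are assumptions taken from the failing invocation, not verified against this VM):

	# Inspect the kubelet inside the VM (commands quoted from kubeadm's troubleshooting hints above)
	out/minikube-linux-amd64 -p old-k8s-version-838260 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-838260 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# List any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p old-k8s-version-838260 ssh \
	  "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the same start, adding the cgroup-driver override minikube's suggestion points at
	out/minikube-linux-amd64 start -p old-k8s-version-838260 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Whether the extra-config flag actually resolves this run is unverified; the related issue minikube prints above (kubernetes/minikube#4172) tracks the same kubelet-not-running symptom for old Kubernetes versions.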
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 2 (242.191983ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-838260 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-563155                  | no-preload-563155            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-563155                                   | no-preload-563155            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-441438       | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-174381                 | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-174381                                  | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-838260        | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-838260             | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-441438                           | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-639843             | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-639843                  | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-639843 image list                           | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:34:50
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:34:50.343590  429070 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:34:50.343706  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343717  429070 out.go:358] Setting ErrFile to fd 2...
	I0127 13:34:50.343725  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343905  429070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:34:50.344540  429070 out.go:352] Setting JSON to false
	I0127 13:34:50.345553  429070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":22630,"bootTime":1737962260,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:34:50.345705  429070 start.go:139] virtualization: kvm guest
	I0127 13:34:50.348432  429070 out.go:177] * [newest-cni-639843] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:34:50.349607  429070 notify.go:220] Checking for updates...
	I0127 13:34:50.349639  429070 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:34:50.350877  429070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:34:50.352137  429070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:50.353523  429070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:34:50.354936  429070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:34:50.356253  429070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:34:50.358120  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:50.358577  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.358648  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.375344  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 13:34:50.375770  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.376385  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.376429  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.376809  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.377061  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.377398  429070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:34:50.377833  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.377889  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.393490  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0127 13:34:50.393954  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.394574  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.394602  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.394931  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.395175  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.432045  429070 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:34:50.433260  429070 start.go:297] selected driver: kvm2
	I0127 13:34:50.433295  429070 start.go:901] validating driver "kvm2" against &{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.433450  429070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:34:50.434521  429070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.434662  429070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:34:50.455080  429070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:34:50.455695  429070 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:34:50.455755  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:34:50.455835  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:50.455908  429070 start.go:340] cluster config:
	{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
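
For reference, the cluster config dumped above maps roughly onto a plain minikube invocation. The flag set below is a hedged reconstruction from the dumped fields (profile, driver, runtime, resources, and the kubeadm extra option), not the exact command the test harness ran:

    out/minikube-linux-amd64 start -p newest-cni-639843 \
      --driver=kvm2 \
      --container-runtime=crio \
      --kubernetes-version=v1.32.1 \
      --memory=2200 --cpus=2 --disk-size=20000mb \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16
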
	I0127 13:34:50.456092  429070 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.457706  429070 out.go:177] * Starting "newest-cni-639843" primary control-plane node in "newest-cni-639843" cluster
	I0127 13:34:50.458857  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:34:50.458907  429070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:34:50.458924  429070 cache.go:56] Caching tarball of preloaded images
	I0127 13:34:50.459033  429070 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:34:50.459049  429070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:34:50.459193  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:34:50.459403  429070 start.go:360] acquireMachinesLock for newest-cni-639843: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:34:50.459457  429070 start.go:364] duration metric: took 33.893µs to acquireMachinesLock for "newest-cni-639843"
	I0127 13:34:50.459478  429070 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:34:50.459488  429070 fix.go:54] fixHost starting: 
	I0127 13:34:50.459761  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.459807  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.475245  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0127 13:34:50.475743  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.476455  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.476504  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.476932  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.477227  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.477420  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:34:50.479725  429070 fix.go:112] recreateIfNeeded on newest-cni-639843: state=Stopped err=<nil>
	I0127 13:34:50.479768  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	W0127 13:34:50.479933  429070 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:34:50.481457  429070 out.go:177] * Restarting existing kvm2 VM for "newest-cni-639843" ...
	I0127 13:34:48.302747  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:48.321834  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:48.321899  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:48.370678  427154 cri.go:89] found id: ""
	I0127 13:34:48.370716  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.370732  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:48.370741  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:48.370813  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:48.430514  427154 cri.go:89] found id: ""
	I0127 13:34:48.430655  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.430683  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:48.430702  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:48.430826  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:48.477908  427154 cri.go:89] found id: ""
	I0127 13:34:48.477941  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.477954  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:48.477962  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:48.478036  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:48.532193  427154 cri.go:89] found id: ""
	I0127 13:34:48.532230  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.532242  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:48.532250  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:48.532316  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:48.580627  427154 cri.go:89] found id: ""
	I0127 13:34:48.580658  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.580667  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:48.580673  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:48.580744  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:48.620393  427154 cri.go:89] found id: ""
	I0127 13:34:48.620428  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.620441  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:48.620449  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:48.620518  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:48.662032  427154 cri.go:89] found id: ""
	I0127 13:34:48.662071  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.662079  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:48.662097  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:48.662164  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:48.699662  427154 cri.go:89] found id: ""
	I0127 13:34:48.699697  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.699709  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:48.699723  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:48.699745  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:48.752100  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:48.752134  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:48.768121  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:48.768167  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:48.838690  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:48.838718  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:48.838734  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:48.928433  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:48.928471  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
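
The log-gathering pass above can be reproduced by hand inside the guest; the commands below are the same ones the test driver runs over SSH, copied from the log (the describe-nodes call keeps failing while localhost:8443 refuses connections):

    sudo crictl ps -a --quiet --name=kube-apiserver      # empty output: no apiserver container yet
    sudo journalctl -u kubelet -n 400                    # kubelet logs
    sudo journalctl -u crio -n 400                       # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
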
	I0127 13:34:52.576263  426243 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:34:52.576356  426243 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:52.576423  426243 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:52.576582  426243 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:52.576704  426243 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:34:52.576783  426243 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:52.578299  426243 out.go:235]   - Generating certificates and keys ...
	I0127 13:34:52.578380  426243 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:34:52.578439  426243 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:34:52.578509  426243 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:34:52.578594  426243 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:34:52.578701  426243 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:34:52.578757  426243 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:34:52.578818  426243 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:34:52.578870  426243 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:34:52.578962  426243 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:34:52.579063  426243 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:34:52.579111  426243 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:34:52.579164  426243 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:34:52.579227  426243 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:34:52.579282  426243 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:34:52.579333  426243 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:34:52.579387  426243 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:34:52.579449  426243 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:34:52.579519  426243 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:34:52.579604  426243 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:34:52.581730  426243 out.go:235]   - Booting up control plane ...
	I0127 13:34:52.581854  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:34:52.581961  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:34:52.582058  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:34:52.582184  426243 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:34:52.582253  426243 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:34:52.582290  426243 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:34:52.582417  426243 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:34:52.582554  426243 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:34:52.582651  426243 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002999225s
	I0127 13:34:52.582795  426243 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:34:52.582903  426243 kubeadm.go:310] [api-check] The API server is healthy after 5.501149453s
	I0127 13:34:52.583076  426243 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:34:52.583258  426243 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:34:52.583323  426243 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:34:52.583591  426243 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-174381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:34:52.583679  426243 kubeadm.go:310] [bootstrap-token] Using token: 5hn0ox.etnk5twofkqgha4f
	I0127 13:34:52.584876  426243 out.go:235]   - Configuring RBAC rules ...
	I0127 13:34:52.585016  426243 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:34:52.585138  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:34:52.585329  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:34:52.585515  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:34:52.585645  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:34:52.585730  426243 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:34:52.585829  426243 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:34:52.585867  426243 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:34:52.585911  426243 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:34:52.585917  426243 kubeadm.go:310] 
	I0127 13:34:52.585967  426243 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:34:52.585973  426243 kubeadm.go:310] 
	I0127 13:34:52.586066  426243 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:34:52.586082  426243 kubeadm.go:310] 
	I0127 13:34:52.586138  426243 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:34:52.586214  426243 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:34:52.586295  426243 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:34:52.586319  426243 kubeadm.go:310] 
	I0127 13:34:52.586416  426243 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:34:52.586463  426243 kubeadm.go:310] 
	I0127 13:34:52.586522  426243 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:34:52.586532  426243 kubeadm.go:310] 
	I0127 13:34:52.586628  426243 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:34:52.586712  426243 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:34:52.586770  426243 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:34:52.586777  426243 kubeadm.go:310] 
	I0127 13:34:52.586857  426243 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:34:52.586926  426243 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:34:52.586932  426243 kubeadm.go:310] 
	I0127 13:34:52.587010  426243 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587095  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 \
	I0127 13:34:52.587119  426243 kubeadm.go:310] 	--control-plane 
	I0127 13:34:52.587125  426243 kubeadm.go:310] 
	I0127 13:34:52.587196  426243 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:34:52.587204  426243 kubeadm.go:310] 
	I0127 13:34:52.587272  426243 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587400  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 
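
Once kubeadm reports success, the node can be sanity-checked on the spot with the admin kubeconfig it just wrote; a minimal check, assuming shell access to the control-plane VM (binary path taken from the log):

    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system
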
	I0127 13:34:52.587418  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:34:52.587432  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:52.588976  426243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
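
The bridge CNI step pushes a single conflist (a standard bridge plugin with host-local IPAM) to /etc/cni/net.d/1-k8s.conflist, as seen later in this log. A hedged way to inspect the result on this profile, assuming the VM is still up:

    out/minikube-linux-amd64 -p embed-certs-174381 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
    out/minikube-linux-amd64 -p embed-certs-174381 ssh "sudo crictl info"    # CRI-O's view of the active CNI config
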
	I0127 13:34:50.482735  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Start
	I0127 13:34:50.482923  429070 main.go:141] libmachine: (newest-cni-639843) starting domain...
	I0127 13:34:50.482942  429070 main.go:141] libmachine: (newest-cni-639843) ensuring networks are active...
	I0127 13:34:50.483967  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network default is active
	I0127 13:34:50.484412  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network mk-newest-cni-639843 is active
	I0127 13:34:50.484881  429070 main.go:141] libmachine: (newest-cni-639843) getting domain XML...
	I0127 13:34:50.485667  429070 main.go:141] libmachine: (newest-cni-639843) creating domain...
	I0127 13:34:51.790885  429070 main.go:141] libmachine: (newest-cni-639843) waiting for IP...
	I0127 13:34:51.792240  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:51.793056  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:51.793082  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:51.792897  429104 retry.go:31] will retry after 310.654811ms: waiting for domain to come up
	I0127 13:34:52.105667  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.106457  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.106639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.106581  429104 retry.go:31] will retry after 280.140783ms: waiting for domain to come up
	I0127 13:34:52.388057  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.388616  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.388639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.388575  429104 retry.go:31] will retry after 317.414736ms: waiting for domain to come up
	I0127 13:34:52.708208  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.708845  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.708880  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.708795  429104 retry.go:31] will retry after 475.980482ms: waiting for domain to come up
	I0127 13:34:53.186613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.187252  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.187320  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.187240  429104 retry.go:31] will retry after 619.306112ms: waiting for domain to come up
	I0127 13:34:53.807794  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.808436  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.808485  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.808365  429104 retry.go:31] will retry after 838.158661ms: waiting for domain to come up
	I0127 13:34:54.647849  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:54.648442  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:54.648465  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:54.648411  429104 retry.go:31] will retry after 739.028542ms: waiting for domain to come up
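
The retry loop above is minikube polling libvirt for the restarted domain's DHCP lease. With libvirt access on the CI host, the same state can be checked directly with stock virsh commands (domain and network names taken from the log):

    virsh domstate newest-cni-639843
    virsh domiflist newest-cni-639843                   # interface on network mk-newest-cni-639843, MAC 52:54:00:cd:d6:b3
    virsh net-dhcp-leases mk-newest-cni-639843          # stays empty until the guest acquires a lease
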
	I0127 13:34:51.475609  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:51.489500  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:51.489579  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:51.536219  427154 cri.go:89] found id: ""
	I0127 13:34:51.536250  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.536262  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:51.536270  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:51.536334  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:51.577494  427154 cri.go:89] found id: ""
	I0127 13:34:51.577522  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.577536  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:51.577543  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:51.577606  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:51.614430  427154 cri.go:89] found id: ""
	I0127 13:34:51.614463  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.614476  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:51.614484  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:51.614602  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:51.666530  427154 cri.go:89] found id: ""
	I0127 13:34:51.666582  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.666591  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:51.666597  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:51.666653  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:51.705538  427154 cri.go:89] found id: ""
	I0127 13:34:51.705567  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.705579  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:51.705587  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:51.705645  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:51.743604  427154 cri.go:89] found id: ""
	I0127 13:34:51.743638  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.743650  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:51.743658  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:51.743721  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:51.778029  427154 cri.go:89] found id: ""
	I0127 13:34:51.778058  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.778070  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:51.778078  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:51.778148  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:51.819260  427154 cri.go:89] found id: ""
	I0127 13:34:51.819294  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.819307  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:51.819321  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:51.819338  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:51.887511  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:51.887552  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:51.904227  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:51.904261  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:51.980655  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:51.980684  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:51.980699  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:52.085922  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:52.085973  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:54.642029  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:54.655922  427154 kubeadm.go:597] duration metric: took 4m4.240008337s to restartPrimaryControlPlane
	W0127 13:34:54.656192  427154 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:34:54.656244  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:34:52.590276  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:34:52.604204  426243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:34:52.631515  426243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:34:52.631609  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:52.631702  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-174381 minikube.k8s.io/updated_at=2025_01_27T13_34_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-174381 minikube.k8s.io/primary=true
	I0127 13:34:52.663541  426243 ops.go:34] apiserver oom_adj: -16
	I0127 13:34:52.870691  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.371756  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.871386  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.371644  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.871179  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.370747  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.871458  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.371676  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.870824  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.982232  426243 kubeadm.go:1113] duration metric: took 4.350694221s to wait for elevateKubeSystemPrivileges
	I0127 13:34:56.982281  426243 kubeadm.go:394] duration metric: took 6m1.699030467s to StartCluster
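
The repeated "kubectl get sa default" calls above are minikube waiting for the default service account to appear before it considers kube-system privileges elevated. A rough shell equivalent of that wait, using the binary and kubeconfig paths from the log:

    K=/var/lib/minikube/binaries/v1.32.1/kubectl
    until sudo $K get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
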
	I0127 13:34:56.982314  426243 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.982426  426243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:56.983746  426243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.984032  426243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:34:56.984111  426243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:34:56.984230  426243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-174381"
	I0127 13:34:56.984249  426243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-174381"
	W0127 13:34:56.984258  426243 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:34:56.984273  426243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-174381"
	I0127 13:34:56.984292  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984300  426243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-174381"
	I0127 13:34:56.984303  426243 config.go:182] Loaded profile config "embed-certs-174381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:56.984359  426243 addons.go:69] Setting dashboard=true in profile "embed-certs-174381"
	I0127 13:34:56.984372  426243 addons.go:238] Setting addon dashboard=true in "embed-certs-174381"
	W0127 13:34:56.984381  426243 addons.go:247] addon dashboard should already be in state true
	I0127 13:34:56.984405  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984450  426243 addons.go:69] Setting metrics-server=true in profile "embed-certs-174381"
	I0127 13:34:56.984484  426243 addons.go:238] Setting addon metrics-server=true in "embed-certs-174381"
	W0127 13:34:56.984494  426243 addons.go:247] addon metrics-server should already be in state true
	I0127 13:34:56.984524  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984760  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984778  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984799  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984801  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984812  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984826  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984943  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984977  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.986354  426243 out.go:177] * Verifying Kubernetes components...
	I0127 13:34:56.988314  426243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:34:57.003008  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I0127 13:34:57.003716  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.003737  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0127 13:34:57.004011  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0127 13:34:57.004163  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004169  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0127 13:34:57.004457  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004482  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.004559  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004638  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004651  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004670  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005012  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005085  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005111  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005198  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.005324  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005340  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005955  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005969  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005970  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.006577  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006617  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.006912  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006964  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.007601  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.007633  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.009217  426243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-174381"
	W0127 13:34:57.009239  426243 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:34:57.009268  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:57.009605  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.009648  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.027242  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0127 13:34:57.027495  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0127 13:34:57.027644  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.027844  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.028181  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028198  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028301  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028318  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028539  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028633  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.028808  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.029068  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0127 13:34:57.029543  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.030162  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.030190  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.030581  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.030601  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031166  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.031207  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.031430  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031637  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I0127 13:34:57.031993  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.032625  426243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:34:57.032750  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.032765  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.033302  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.033477  426243 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:34:57.033498  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.033587  426243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.033607  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:34:57.033627  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.035541  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.035761  426243 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036794  426243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036804  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:34:57.036814  426243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:34:57.036833  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.037349  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.037808  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:34:57.037827  426243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:34:57.037856  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.038015  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.038042  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.038208  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.038375  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.038561  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.038701  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.041035  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041500  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.041519  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041915  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042008  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042189  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042254  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.042272  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042592  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042583  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.042727  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042852  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.055810  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0127 13:34:57.056237  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.056772  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.056801  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.057165  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.057501  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.059165  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.059398  426243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.059418  426243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:34:57.059437  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.062703  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063236  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.063266  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063369  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.063544  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.063694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.063831  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
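
Each ssh client set up above talks straight to the VM with the per-machine key; the same session can be opened manually, or through minikube itself (key path, user, and address copied from the log):

    ssh -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa docker@192.168.39.7
    out/minikube-linux-amd64 -p embed-certs-174381 ssh
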
	I0127 13:34:57.242347  426243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:34:57.326178  426243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352801  426243 node_ready.go:49] node "embed-certs-174381" has status "Ready":"True"
	I0127 13:34:57.352828  426243 node_ready.go:38] duration metric: took 26.613856ms for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352841  426243 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:34:57.368293  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:57.372941  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:34:57.372962  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:34:57.391676  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.418587  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:34:57.418616  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:34:57.446588  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.460844  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:34:57.460869  426243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:34:57.507947  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:34:57.507976  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:34:57.542669  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.542701  426243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:34:57.630641  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:34:57.630672  426243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:34:57.639506  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.693463  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:34:57.693498  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:34:57.806045  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:34:57.806082  426243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:34:57.930058  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:34:57.930101  426243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:34:58.055263  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:34:58.055295  426243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:34:58.110576  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:34:58.110609  426243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:34:58.202270  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.202305  426243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:34:58.293311  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
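
Outside the test harness, the same addons are normally switched on through the minikube CLI rather than by scp-ing manifests and calling kubectl apply; a hedged user-facing equivalent for this profile:

    out/minikube-linux-amd64 -p embed-certs-174381 addons enable storage-provisioner
    out/minikube-linux-amd64 -p embed-certs-174381 addons enable metrics-server
    out/minikube-linux-amd64 -p embed-certs-174381 addons enable dashboard
    out/minikube-linux-amd64 -p embed-certs-174381 addons list
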
	I0127 13:34:58.514356  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067720868s)
	I0127 13:34:58.514435  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514450  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.514846  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.514876  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.514894  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.514909  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514920  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.515161  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.515197  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.515860  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516243  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124532885s)
	I0127 13:34:58.516270  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516281  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.516739  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.516757  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.516768  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516776  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516787  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.517207  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.517230  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549206  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.549228  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.549614  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.549638  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549648  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260116  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.620545789s)
	I0127 13:34:59.260244  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260271  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.260620  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260713  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.260730  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.260746  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260761  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.261011  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.261041  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.261061  426243 addons.go:479] Verifying addon metrics-server=true in "embed-certs-174381"
	I0127 13:34:59.395546  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.395572  426243 pod_ready.go:82] duration metric: took 2.027244475s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.395586  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407673  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.407695  426243 pod_ready.go:82] duration metric: took 12.102291ms for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407705  426243 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417168  426243 pod_ready.go:93] pod "etcd-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.417190  426243 pod_ready.go:82] duration metric: took 9.47928ms for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417199  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:00.168433  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.875044372s)
	I0127 13:35:00.168496  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.168520  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.168866  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.170590  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.170645  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.170666  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.170673  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.171042  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.171132  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.171105  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.172686  426243 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-174381 addons enable metrics-server
	
	I0127 13:35:00.174376  426243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:34:59.517968  427154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.861694115s)
	I0127 13:34:59.518062  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:34:59.536180  427154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:34:59.547986  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:34:59.561566  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:34:59.561591  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:34:59.561645  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:34:59.574802  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:34:59.574872  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:34:59.588185  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:34:59.598292  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:34:59.598356  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:34:59.608921  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.621764  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:34:59.621825  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.635526  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:34:59.646582  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:34:59.646644  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:34:59.657975  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:34:59.745239  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:34:59.745337  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:59.946676  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:59.946890  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:59.947050  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:35:00.183580  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:55.388471  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:55.388933  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:55.388964  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:55.388914  429104 retry.go:31] will retry after 1.346738272s: waiting for domain to come up
	I0127 13:34:56.737433  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:56.738024  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:56.738081  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:56.738007  429104 retry.go:31] will retry after 1.120347472s: waiting for domain to come up
	I0127 13:34:57.860265  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:57.860912  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:57.860943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:57.860882  429104 retry.go:31] will retry after 2.152534572s: waiting for domain to come up
	I0127 13:35:00.015953  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:00.016579  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:00.016613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:00.016544  429104 retry.go:31] will retry after 2.588698804s: waiting for domain to come up
	I0127 13:35:00.184950  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:35:00.185049  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:35:00.185140  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:35:00.185334  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:35:00.185435  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:35:00.186094  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:35:00.186301  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:35:00.187022  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:35:00.187455  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:35:00.187928  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:35:00.188334  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:35:00.188531  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:35:00.188608  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:35:00.344156  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:35:00.836083  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:35:00.964664  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:35:01.072929  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:35:01.092946  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:35:01.097538  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:35:01.097961  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:35:01.292953  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:35:00.175566  426243 addons.go:514] duration metric: took 3.191465201s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:35:01.424773  426243 pod_ready.go:103] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"False"
	I0127 13:35:01.924012  426243 pod_ready.go:93] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:01.924044  426243 pod_ready.go:82] duration metric: took 2.506836977s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:01.924057  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.607848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:02.608639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:02.608669  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:02.608620  429104 retry.go:31] will retry after 2.763044938s: waiting for domain to come up
	I0127 13:35:01.294375  427154 out.go:235]   - Booting up control plane ...
	I0127 13:35:01.294569  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:35:01.306014  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:35:01.309847  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:35:01.310062  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:35:01.312436  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:35:02.931062  426243 pod_ready.go:93] pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.931095  426243 pod_ready.go:82] duration metric: took 1.007026875s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.931108  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936917  426243 pod_ready.go:93] pod "kube-proxy-cjsf9" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.936945  426243 pod_ready.go:82] duration metric: took 5.828276ms for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936957  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444155  426243 pod_ready.go:93] pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:03.444192  426243 pod_ready.go:82] duration metric: took 507.225554ms for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444203  426243 pod_ready.go:39] duration metric: took 6.091349359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:35:03.444226  426243 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:03.444294  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:03.488162  426243 api_server.go:72] duration metric: took 6.504085901s to wait for apiserver process to appear ...
	I0127 13:35:03.488197  426243 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:03.488224  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:35:03.493586  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0127 13:35:03.494867  426243 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:03.494894  426243 api_server.go:131] duration metric: took 6.689991ms to wait for apiserver health ...
	I0127 13:35:03.494903  426243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:03.575835  426243 system_pods.go:59] 9 kube-system pods found
	I0127 13:35:03.575871  426243 system_pods.go:61] "coredns-668d6bf9bc-9ncnm" [8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7] Running
	I0127 13:35:03.575877  426243 system_pods.go:61] "coredns-668d6bf9bc-hjncm" [68641e50-9f99-4811-9752-c7dc0db47502] Running
	I0127 13:35:03.575881  426243 system_pods.go:61] "etcd-embed-certs-174381" [fc5cb0ba-724d-4b3d-a6d0-65644ed57d99] Running
	I0127 13:35:03.575886  426243 system_pods.go:61] "kube-apiserver-embed-certs-174381" [7afdc2d3-86bd-480d-a081-e1475ff21346] Running
	I0127 13:35:03.575890  426243 system_pods.go:61] "kube-controller-manager-embed-certs-174381" [fa410171-2b30-4c79-97d4-87c1549fd75c] Running
	I0127 13:35:03.575894  426243 system_pods.go:61] "kube-proxy-cjsf9" [c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6] Running
	I0127 13:35:03.575901  426243 system_pods.go:61] "kube-scheduler-embed-certs-174381" [ab92b381-fb78-4aa1-bc55-4e47a58f2c32] Running
	I0127 13:35:03.575908  426243 system_pods.go:61] "metrics-server-f79f97bbb-hxlwf" [cb779c78-85f9-48e7-88c3-f087f57547e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:03.575913  426243 system_pods.go:61] "storage-provisioner" [3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1] Running
	I0127 13:35:03.575922  426243 system_pods.go:74] duration metric: took 81.012821ms to wait for pod list to return data ...
	I0127 13:35:03.575931  426243 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:03.772597  426243 default_sa.go:45] found service account: "default"
	I0127 13:35:03.772641  426243 default_sa.go:55] duration metric: took 196.700969ms for default service account to be created ...
	I0127 13:35:03.772655  426243 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:35:03.976966  426243 system_pods.go:87] 9 kube-system pods found
	I0127 13:35:05.375624  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:05.376167  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:05.376199  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:05.376124  429104 retry.go:31] will retry after 2.824398155s: waiting for domain to come up
	I0127 13:35:08.203385  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:08.203848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:08.203881  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:08.203823  429104 retry.go:31] will retry after 4.529537578s: waiting for domain to come up
	I0127 13:35:12.735786  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736343  429070 main.go:141] libmachine: (newest-cni-639843) found domain IP: 192.168.50.22
	I0127 13:35:12.736364  429070 main.go:141] libmachine: (newest-cni-639843) reserving static IP address...
	I0127 13:35:12.736378  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has current primary IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736707  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.736748  429070 main.go:141] libmachine: (newest-cni-639843) reserved static IP address 192.168.50.22 for domain newest-cni-639843
	I0127 13:35:12.736770  429070 main.go:141] libmachine: (newest-cni-639843) DBG | skip adding static IP to network mk-newest-cni-639843 - found existing host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"}
	I0127 13:35:12.736785  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Getting to WaitForSSH function...
	I0127 13:35:12.736810  429070 main.go:141] libmachine: (newest-cni-639843) waiting for SSH...
	I0127 13:35:12.739230  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739563  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.739592  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739721  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH client type: external
	I0127 13:35:12.739746  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa (-rw-------)
	I0127 13:35:12.739781  429070 main.go:141] libmachine: (newest-cni-639843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:35:12.739791  429070 main.go:141] libmachine: (newest-cni-639843) DBG | About to run SSH command:
	I0127 13:35:12.739800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | exit 0
	I0127 13:35:12.866664  429070 main.go:141] libmachine: (newest-cni-639843) DBG | SSH cmd err, output: <nil>: 
	I0127 13:35:12.867059  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetConfigRaw
	I0127 13:35:12.867776  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:12.870461  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.870943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.870979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.871221  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:35:12.871401  429070 machine.go:93] provisionDockerMachine start ...
	I0127 13:35:12.871421  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:12.871618  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.873979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874373  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.874411  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874581  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.874746  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.874903  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.875063  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.875221  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.875426  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.875440  429070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:35:12.979102  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:35:12.979140  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979406  429070 buildroot.go:166] provisioning hostname "newest-cni-639843"
	I0127 13:35:12.979435  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.982631  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983000  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.983025  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983170  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.983324  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983447  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983605  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.983809  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.984033  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.984051  429070 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-639843 && echo "newest-cni-639843" | sudo tee /etc/hostname
	I0127 13:35:13.107964  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-639843
	
	I0127 13:35:13.108004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.111168  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111589  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.111617  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111790  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.111995  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112158  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112289  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.112481  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.112709  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.112733  429070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-639843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-639843/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-639843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:35:13.226643  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:35:13.226683  429070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:35:13.226734  429070 buildroot.go:174] setting up certificates
	I0127 13:35:13.226749  429070 provision.go:84] configureAuth start
	I0127 13:35:13.226767  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:13.227060  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:13.230284  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230719  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.230752  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.233444  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233798  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.233832  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233972  429070 provision.go:143] copyHostCerts
	I0127 13:35:13.234039  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:35:13.234053  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:35:13.234146  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:35:13.234301  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:35:13.234313  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:35:13.234354  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:35:13.234450  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:35:13.234462  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:35:13.234497  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:35:13.234598  429070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.newest-cni-639843 san=[127.0.0.1 192.168.50.22 localhost minikube newest-cni-639843]
	I0127 13:35:13.505038  429070 provision.go:177] copyRemoteCerts
	I0127 13:35:13.505119  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:35:13.505154  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.508162  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508530  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.508555  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508759  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.508944  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.509117  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.509267  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:13.595888  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:35:13.621151  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:35:13.647473  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:35:13.673605  429070 provision.go:87] duration metric: took 446.83901ms to configureAuth
	I0127 13:35:13.673655  429070 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:35:13.673889  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:13.674004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.676982  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677392  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.677421  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677573  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.677762  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.677972  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.678123  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.678273  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.678496  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.678527  429070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:35:13.921465  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:35:13.921494  429070 machine.go:96] duration metric: took 1.050079095s to provisionDockerMachine
	I0127 13:35:13.921510  429070 start.go:293] postStartSetup for "newest-cni-639843" (driver="kvm2")
	I0127 13:35:13.921522  429070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:35:13.921543  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:13.921954  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:35:13.922025  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.925574  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.925941  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.926012  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.926266  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.926493  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.926675  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.926888  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.014753  429070 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:35:14.019344  429070 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:35:14.019374  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:35:14.019439  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:35:14.019540  429070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:35:14.019659  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:35:14.031277  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:14.060121  429070 start.go:296] duration metric: took 138.59357ms for postStartSetup
	I0127 13:35:14.060165  429070 fix.go:56] duration metric: took 23.600678344s for fixHost
	I0127 13:35:14.060188  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.063145  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063514  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.063542  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063761  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.063980  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064176  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064340  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.064541  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:14.064724  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:14.064738  429070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:35:14.172785  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984914.150810987
	
	I0127 13:35:14.172823  429070 fix.go:216] guest clock: 1737984914.150810987
	I0127 13:35:14.172832  429070 fix.go:229] Guest: 2025-01-27 13:35:14.150810987 +0000 UTC Remote: 2025-01-27 13:35:14.060169498 +0000 UTC m=+23.763612053 (delta=90.641489ms)
	I0127 13:35:14.172889  429070 fix.go:200] guest clock delta is within tolerance: 90.641489ms
	I0127 13:35:14.172905  429070 start.go:83] releasing machines lock for "newest-cni-639843", held for 23.713435883s
	I0127 13:35:14.172938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.173202  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:14.176163  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176559  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.176600  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177182  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177351  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177450  429070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:35:14.177498  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.177596  429070 ssh_runner.go:195] Run: cat /version.json
	I0127 13:35:14.177625  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.180456  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180561  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180838  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180910  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.180914  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180944  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.181150  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.181189  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181344  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181357  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181546  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181536  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.181739  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.283980  429070 ssh_runner.go:195] Run: systemctl --version
	I0127 13:35:14.290329  429070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:35:14.450608  429070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:35:14.461512  429070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:35:14.461597  429070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:35:14.482924  429070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:35:14.482951  429070 start.go:495] detecting cgroup driver to use...
	I0127 13:35:14.483022  429070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:35:14.503452  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:35:14.517592  429070 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:35:14.517659  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:35:14.532792  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:35:14.547306  429070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:35:14.671116  429070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:35:14.818034  429070 docker.go:233] disabling docker service ...
	I0127 13:35:14.818133  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:35:14.832550  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:35:14.845137  429070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:35:14.986833  429070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:35:15.122943  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:35:15.137706  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:35:15.157591  429070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:35:15.157669  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.168185  429070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:35:15.168268  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.178876  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.188792  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.198951  429070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:35:15.209169  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.219549  429070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.238633  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.249729  429070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:35:15.259178  429070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:35:15.259244  429070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:35:15.272097  429070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:35:15.281611  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:15.403472  429070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:35:15.498842  429070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:35:15.498928  429070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:35:15.505405  429070 start.go:563] Will wait 60s for crictl version
	I0127 13:35:15.505478  429070 ssh_runner.go:195] Run: which crictl
	I0127 13:35:15.509869  429070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:35:15.580026  429070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:35:15.580122  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.609376  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.643173  429070 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:35:15.644483  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:15.647483  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.647905  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:15.647930  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.648148  429070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 13:35:15.652911  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
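The one-liner above is the usual pattern for rewriting a root-owned file from a non-root shell: the redirection runs as the calling user, so the new contents are staged under /tmp and only the final copy needs sudo. The same host entry, in isolation:

    # Drop any stale host.minikube.internal line, append the fresh one, then install it.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.50.1\thost.minikube.internal'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts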
	I0127 13:35:15.668696  429070 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 13:35:15.670127  429070 kubeadm.go:883] updating cluster {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mu
ltiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:35:15.670264  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:35:15.670328  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:15.716362  429070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 13:35:15.716455  429070 ssh_runner.go:195] Run: which lz4
	I0127 13:35:15.721254  429070 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:35:15.727443  429070 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:35:15.727478  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 13:35:17.208454  429070 crio.go:462] duration metric: took 1.487249966s to copy over tarball
	I0127 13:35:17.208542  429070 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:35:19.421239  429070 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.212662568s)
	I0127 13:35:19.421271  429070 crio.go:469] duration metric: took 2.21278342s to extract the tarball
	I0127 13:35:19.421281  429070 ssh_runner.go:146] rm: /preloaded.tar.lz4
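The preload path above copies a ~398 MB tarball of container images into the VM and unpacks it over /var so that CRI-O's image store is pre-populated. A sketch of replaying the unpack step by hand, with the same flags the log shows:

    # Extract the preload tarball into /var, preserving security xattrs, then confirm
    # the runtime can see the images.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json | head
    sudo rm -f /preloaded.tar.lz4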
	I0127 13:35:19.461756  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:19.504974  429070 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:35:19.505005  429070 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:35:19.505015  429070 kubeadm.go:934] updating node { 192.168.50.22 8443 v1.32.1 crio true true} ...
	I0127 13:35:19.505173  429070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-639843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
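The ExecStart override above is installed as a systemd drop-in (the scp lines further down write it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). To see how the flags land on the node:

    # Show the effective kubelet unit, including minikube's drop-in.
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf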
	I0127 13:35:19.505269  429070 ssh_runner.go:195] Run: crio config
	I0127 13:35:19.556732  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:19.556754  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:19.556766  429070 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 13:35:19.556791  429070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.22 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-639843 NodeName:newest-cni-639843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:35:19.556951  429070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-639843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
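The generated config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single multi-document file. Before it is copied into place and fed to the init phases further down, a file of this shape can be checked with kubeadm's own validator (a sketch; the binary and file paths are the ones this log uses later):

    # Validate the rendered kubeadm config against the v1beta4 API.
    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml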
	
	I0127 13:35:19.557032  429070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:35:19.567405  429070 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:35:19.567483  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:35:19.577572  429070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 13:35:19.595555  429070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:35:19.612336  429070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0127 13:35:19.630199  429070 ssh_runner.go:195] Run: grep 192.168.50.22	control-plane.minikube.internal$ /etc/hosts
	I0127 13:35:19.634268  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:19.646912  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:19.764087  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:19.783083  429070 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843 for IP: 192.168.50.22
	I0127 13:35:19.783115  429070 certs.go:194] generating shared ca certs ...
	I0127 13:35:19.783139  429070 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:19.783330  429070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:35:19.783386  429070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:35:19.783400  429070 certs.go:256] generating profile certs ...
	I0127 13:35:19.783534  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/client.key
	I0127 13:35:19.783619  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key.505bfb94
	I0127 13:35:19.783671  429070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key
	I0127 13:35:19.783826  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:35:19.783866  429070 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:35:19.783880  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:35:19.783913  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:35:19.783939  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:35:19.783961  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:35:19.784010  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:19.784667  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:35:19.821550  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:35:19.860184  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:35:19.893311  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:35:19.926181  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:35:19.954565  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:35:19.997938  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:35:20.022058  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:35:20.045748  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:35:20.069279  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:35:20.092959  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:35:20.117180  429070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:35:20.135202  429070 ssh_runner.go:195] Run: openssl version
	I0127 13:35:20.141197  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:35:20.152160  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156810  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156871  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.162645  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:35:20.174920  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:35:20.187426  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192129  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192174  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.198019  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:35:20.210195  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:35:20.220934  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225588  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225622  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.231516  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:35:20.243779  429070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:35:20.248511  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:35:20.254523  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:35:20.260441  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:35:20.266429  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:35:20.272290  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:35:20.278051  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
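The openssl runs above use -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that is the freshness test applied to each control-plane cert before restart. Checked by hand against one of the same files:

    # Print the expiry date and test whether the cert is still valid for 24h.
    openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"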
	I0127 13:35:20.284024  429070 kubeadm.go:392] StartCluster: {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Multi
NodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:35:20.284105  429070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:35:20.284164  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.332523  429070 cri.go:89] found id: ""
	I0127 13:35:20.332587  429070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:35:20.344932  429070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:35:20.344959  429070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:35:20.345011  429070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:35:20.355729  429070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:35:20.356795  429070 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-639843" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:20.357505  429070 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-639843" cluster setting kubeconfig missing "newest-cni-639843" context setting]
	I0127 13:35:20.358374  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:20.360037  429070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:35:20.371572  429070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.22
	I0127 13:35:20.371606  429070 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:35:20.371622  429070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:35:20.371679  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.418797  429070 cri.go:89] found id: ""
	I0127 13:35:20.418873  429070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:35:20.437304  429070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:35:20.447636  429070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:35:20.447660  429070 kubeadm.go:157] found existing configuration files:
	
	I0127 13:35:20.447704  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:35:20.458280  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:35:20.458335  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:35:20.469304  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:35:20.478639  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:35:20.478689  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:35:20.488624  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.497867  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:35:20.497908  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.507379  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:35:20.516362  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:35:20.516416  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:35:20.525787  429070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:35:20.542646  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:20.671597  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.498726  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.899789  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.965210  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
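Rather than a full `kubeadm init`, the restart path above replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file; the remaining addon phase runs later, once the API server is healthy. The phase tree these commands are drawn from can be listed with:

    # List the available kubeadm init phases.
    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm init phase --help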
	I0127 13:35:22.062165  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:22.062252  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:22.563318  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.063066  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.082649  429070 api_server.go:72] duration metric: took 1.020482627s to wait for apiserver process to appear ...
	I0127 13:35:23.082686  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:23.082711  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:23.083244  429070 api_server.go:269] stopped: https://192.168.50.22:8443/healthz: Get "https://192.168.50.22:8443/healthz": dial tcp 192.168.50.22:8443: connect: connection refused
	I0127 13:35:23.583699  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.503776  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:35:25.503807  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:35:25.503825  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.547403  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.547434  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:25.583659  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.589328  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.589357  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.082833  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.087881  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.087908  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.583159  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.592115  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.592148  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:27.083703  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.090407  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
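The polling above hits /healthz directly: the very first attempt is refused (apiserver not yet listening), an early one returns 403 for the anonymous user, the next few return 500 while post-start hooks finish, and this one finally returns 200. The same probe by hand (curl assumed available on the node; anonymous access can still return 403 depending on RBAC state):

    # -k skips verification of the cluster's self-signed serving cert; ?verbose asks
    # for the per-check breakdown that produced the [+]/[-] lines above.
    curl -k "https://192.168.50.22:8443/healthz?verbose"
    curl -k "https://192.168.50.22:8443/livez?verbose"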
	I0127 13:35:27.098905  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.098928  429070 api_server.go:131] duration metric: took 4.01623437s to wait for apiserver health ...
	I0127 13:35:27.098938  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:27.098944  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:27.100651  429070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:35:27.101855  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:35:27.116286  429070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
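The 496-byte file written here is the bridge CNI conflist mentioned in the line above; its contents are not printed in this log, but they can be inspected on the node, along with the plugin binaries it references (assuming the usual /opt/cni/bin plugin directory):

    # Inspect the CNI config minikube just wrote and the plugins available to it.
    sudo cat /etc/cni/net.d/1-k8s.conflist
    ls /opt/cni/bin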
	I0127 13:35:27.139348  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.158680  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.158717  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.158730  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.158741  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.158748  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.158757  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:35:27.158766  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.158776  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.158785  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:35:27.158819  429070 system_pods.go:74] duration metric: took 19.446392ms to wait for pod list to return data ...
	I0127 13:35:27.158832  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.168338  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.168376  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.168392  429070 node_conditions.go:105] duration metric: took 9.550643ms to run NodePressure ...
	I0127 13:35:27.168416  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:27.459759  429070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:35:27.473184  429070 ops.go:34] apiserver oom_adj: -16
	I0127 13:35:27.473212  429070 kubeadm.go:597] duration metric: took 7.128244476s to restartPrimaryControlPlane
	I0127 13:35:27.473226  429070 kubeadm.go:394] duration metric: took 7.18920723s to StartCluster
	I0127 13:35:27.473251  429070 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.473341  429070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:27.475111  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.475373  429070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:35:27.475451  429070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:35:27.475562  429070 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-639843"
	I0127 13:35:27.475584  429070 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-639843"
	W0127 13:35:27.475598  429070 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:35:27.475598  429070 addons.go:69] Setting dashboard=true in profile "newest-cni-639843"
	I0127 13:35:27.475600  429070 addons.go:69] Setting metrics-server=true in profile "newest-cni-639843"
	I0127 13:35:27.475621  429070 addons.go:238] Setting addon dashboard=true in "newest-cni-639843"
	I0127 13:35:27.475629  429070 addons.go:238] Setting addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:27.475639  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	W0127 13:35:27.475643  429070 addons.go:247] addon metrics-server should already be in state true
	I0127 13:35:27.475676  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.475582  429070 addons.go:69] Setting default-storageclass=true in profile "newest-cni-639843"
	I0127 13:35:27.475611  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:27.475708  429070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-639843"
	W0127 13:35:27.475630  429070 addons.go:247] addon dashboard should already be in state true
	I0127 13:35:27.475812  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.476070  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476077  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476115  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476134  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476159  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476168  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476195  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476204  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.477011  429070 out.go:177] * Verifying Kubernetes components...
	I0127 13:35:27.478509  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:27.493703  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0127 13:35:27.493801  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I0127 13:35:27.493955  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0127 13:35:27.494221  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494259  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494795  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494819  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.494840  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494932  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494956  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495188  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495296  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495464  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.495481  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495764  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495798  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495812  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495819  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495871  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.496119  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0127 13:35:27.496433  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.496529  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.496572  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.496893  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.496916  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.497264  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.497502  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.502029  429070 addons.go:238] Setting addon default-storageclass=true in "newest-cni-639843"
	W0127 13:35:27.502051  429070 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:35:27.502080  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.502830  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.502873  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.512816  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I0127 13:35:27.513096  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0127 13:35:27.513275  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I0127 13:35:27.535151  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535226  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535266  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535748  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535766  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.535769  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535791  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.536087  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536347  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536392  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.536559  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.537321  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.537343  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.537676  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.537946  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.538406  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539127  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539700  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.540468  429070 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:35:27.540479  429070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:35:27.541259  429070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:35:27.542133  429070 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.542154  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:35:27.542174  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.542782  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:35:27.542801  429070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:35:27.542820  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.543610  429070 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:35:27.544743  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:35:27.544762  429070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:35:27.544780  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.545935  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546330  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.546364  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546495  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.546708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.546872  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.547017  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.547822  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548084  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548244  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548291  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548448  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548585  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548619  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548786  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548800  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.548938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548980  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.549036  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.549180  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.554799  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0127 13:35:27.555253  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.555780  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.555800  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.556187  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.556616  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.556646  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.574277  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0127 13:35:27.574815  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.575396  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.575420  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.575741  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.575966  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.577346  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.577556  429070 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.577574  429070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:35:27.577594  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.580061  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580408  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.580432  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580659  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.580836  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.580987  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.581148  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.713210  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:27.737971  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:27.738049  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:27.755609  429070 api_server.go:72] duration metric: took 280.198045ms to wait for apiserver process to appear ...
	I0127 13:35:27.755639  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:27.755660  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.765216  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.767614  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.767639  429070 api_server.go:131] duration metric: took 11.991322ms to wait for apiserver health ...
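The healthz wait above is just an HTTPS GET against the apiserver; a rough manual equivalent, run against the same endpoint the log polls (a sketch only, with -k because the cluster CA is not in the host trust store), would be:

	# poll the endpoint api_server.go checks; a healthy apiserver answers "ok"
	curl -k https://192.168.50.22:8443/healthz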
	I0127 13:35:27.767650  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.781696  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.781778  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.781799  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.781815  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.781827  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.781836  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running
	I0127 13:35:27.781862  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.781874  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.781884  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running
	I0127 13:35:27.781895  429070 system_pods.go:74] duration metric: took 14.236485ms to wait for pod list to return data ...
	I0127 13:35:27.781908  429070 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:27.787854  429070 default_sa.go:45] found service account: "default"
	I0127 13:35:27.787884  429070 default_sa.go:55] duration metric: took 5.965578ms for default service account to be created ...
	I0127 13:35:27.787899  429070 kubeadm.go:582] duration metric: took 312.493014ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:35:27.787924  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.793927  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.793949  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.793961  429070 node_conditions.go:105] duration metric: took 6.028431ms to run NodePressure ...
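The ephemeral-storage and CPU figures feeding the NodePressure check are the node's reported capacity; a hedged way to inspect the same values by hand (assuming the kubectl context carries the profile name, as the final "Done!" line below indicates) is:

	# dump node capacity and conditions behind the NodePressure verification
	kubectl --context newest-cni-639843 get node newest-cni-639843 -o jsonpath='{.status.capacity}'
	kubectl --context newest-cni-639843 describe node newest-cni-639843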
	I0127 13:35:27.793975  429070 start.go:241] waiting for startup goroutines ...
	I0127 13:35:27.806081  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.851437  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.912936  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:35:27.912967  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:35:27.941546  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:35:27.941579  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:35:28.017628  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:35:28.017663  429070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:35:28.027973  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:35:28.028016  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:35:28.097111  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.097146  429070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:35:28.148404  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:35:28.148439  429070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:35:28.272234  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.273446  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:35:28.273473  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:35:28.324863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:35:28.324897  429070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:35:28.400474  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:35:28.400504  429070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:35:28.460550  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:35:28.460583  429070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:35:28.508999  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:35:28.509031  429070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:35:28.555538  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555570  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.555889  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.555906  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.555915  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555923  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.556151  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.556180  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.556196  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.564252  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.564277  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.564553  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.564574  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.564893  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.605863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:28.605896  429070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:35:28.650259  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:29.517093  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.66560932s)
	I0127 13:35:29.517160  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517173  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517607  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:29.517645  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517655  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.517664  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517672  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517974  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517996  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741184  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46890411s)
	I0127 13:35:29.741241  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741252  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.741558  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.741576  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741586  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741609  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.742656  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.742680  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.742692  429070 addons.go:479] Verifying addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:29.742659  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:30.069134  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.418812542s)
	I0127 13:35:30.069214  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069233  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069539  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069559  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.069568  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069575  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069840  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069856  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.071209  429070 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-639843 addons enable metrics-server
	
	I0127 13:35:30.072569  429070 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 13:35:30.073970  429070 addons.go:514] duration metric: took 2.598533083s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 13:35:30.074007  429070 start.go:246] waiting for cluster config update ...
	I0127 13:35:30.074019  429070 start.go:255] writing updated cluster config ...
	I0127 13:35:30.074258  429070 ssh_runner.go:195] Run: rm -f paused
	I0127 13:35:30.125745  429070 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:35:30.127324  429070 out.go:177] * Done! kubectl is now configured to use "newest-cni-639843" cluster and "default" namespace by default
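After a start like this one, the enabled addons can be spot-checked with the project's own CLI; an illustrative follow-up (namespaces and labels assumed from the manifests applied above, not captured in this log) would be:

	# addon state for this profile
	out/minikube-linux-amd64 -p newest-cni-639843 addons list
	# workloads behind the dashboard and metrics-server addons
	kubectl --context newest-cni-639843 get pods -n kubernetes-dashboard
	kubectl --context newest-cni-639843 get pods -n kube-system -l k8s-app=metrics-server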
	I0127 13:35:41.313958  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:35:41.315406  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:41.315596  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:46.316260  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:46.316520  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:56.316974  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:56.317208  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:16.318338  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:16.318524  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320677  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:56.320945  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320963  427154 kubeadm.go:310] 
	I0127 13:36:56.321020  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:36:56.321085  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:36:56.321099  427154 kubeadm.go:310] 
	I0127 13:36:56.321165  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:36:56.321228  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:36:56.321357  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:36:56.321378  427154 kubeadm.go:310] 
	I0127 13:36:56.321499  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:36:56.321545  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:36:56.321574  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:36:56.321580  427154 kubeadm.go:310] 
	I0127 13:36:56.321720  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:36:56.321827  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:36:56.321840  427154 kubeadm.go:310] 
	I0127 13:36:56.321935  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:36:56.322018  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:36:56.322099  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:36:56.322162  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:36:56.322169  427154 kubeadm.go:310] 
	I0127 13:36:56.323303  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:36:56.323399  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:36:56.323478  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 13:36:56.323617  427154 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
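The kubelet-check loop fails because nothing answers on 127.0.0.1:10248; kubeadm's hints from the message above, collected into one sequence for a systemd host running CRI-O, are:

	# is the kubelet unit running, and why did it stop?
	systemctl status kubelet
	journalctl -xeu kubelet
	# did any control-plane container start and then exit?
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the listing above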
	
	I0127 13:36:56.323664  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:36:56.804696  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:36:56.819996  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:36:56.830103  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:36:56.830120  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:36:56.830161  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:36:56.839297  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:36:56.839351  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:36:56.848603  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:36:56.857433  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:36:56.857500  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:36:56.867735  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.876669  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:36:56.876721  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.885857  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:36:56.894734  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:36:56.894788  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
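The cleanup loop above keeps a kubeconfig only if it already points at the expected control-plane endpoint; a condensed sketch of the same check (paths and endpoint taken from the log lines, not from minikube's actual code) is:

	for f in admin kubelet controller-manager scheduler; do
	  # drop the file unless it targets the in-cluster endpoint
	  sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done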
	I0127 13:36:56.904112  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:36:56.975515  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:36:56.975724  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:36:57.110596  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:36:57.110748  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:36:57.110890  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:36:57.287182  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:36:57.289124  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:36:57.289247  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:36:57.289310  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:36:57.289405  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:36:57.289504  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:36:57.289595  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:36:57.289665  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:36:57.289780  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:36:57.290345  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:36:57.291337  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:36:57.292274  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:36:57.292554  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:36:57.292622  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:36:57.586245  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:36:57.746278  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:36:57.846816  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:36:57.985775  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:36:58.007369  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:36:58.008417  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:36:58.008485  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:36:58.134182  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:36:58.136066  427154 out.go:235]   - Booting up control plane ...
	I0127 13:36:58.136194  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:36:58.148785  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:36:58.148921  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:36:58.149274  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:36:58.153395  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:37:38.155987  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:37:38.156613  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:38.156831  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:43.157356  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:43.157567  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:53.158341  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:53.158675  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:13.158624  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:13.158876  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157583  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:53.157824  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157839  427154 kubeadm.go:310] 
	I0127 13:38:53.157896  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:38:53.157954  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:38:53.157966  427154 kubeadm.go:310] 
	I0127 13:38:53.158014  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:38:53.158064  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:38:53.158222  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:38:53.158234  427154 kubeadm.go:310] 
	I0127 13:38:53.158404  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:38:53.158453  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:38:53.158483  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:38:53.158491  427154 kubeadm.go:310] 
	I0127 13:38:53.158624  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:38:53.158726  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:38:53.158741  427154 kubeadm.go:310] 
	I0127 13:38:53.158894  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:38:53.159040  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:38:53.159165  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:38:53.159264  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:38:53.159275  427154 kubeadm.go:310] 
	I0127 13:38:53.159902  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:38:53.160042  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:38:53.160128  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:38:53.160213  427154 kubeadm.go:394] duration metric: took 8m2.798471593s to StartCluster
	I0127 13:38:53.160286  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:38:53.160377  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:38:53.205471  427154 cri.go:89] found id: ""
	I0127 13:38:53.205496  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.205504  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:38:53.205510  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:38:53.205577  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:38:53.240500  427154 cri.go:89] found id: ""
	I0127 13:38:53.240532  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.240543  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:38:53.240564  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:38:53.240625  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:38:53.282232  427154 cri.go:89] found id: ""
	I0127 13:38:53.282267  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.282279  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:38:53.282287  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:38:53.282354  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:38:53.315589  427154 cri.go:89] found id: ""
	I0127 13:38:53.315643  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.315659  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:38:53.315666  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:38:53.315735  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:38:53.349806  427154 cri.go:89] found id: ""
	I0127 13:38:53.349836  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.349844  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:38:53.349850  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:38:53.349906  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:38:53.382052  427154 cri.go:89] found id: ""
	I0127 13:38:53.382084  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.382095  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:38:53.382103  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:38:53.382176  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:38:53.416057  427154 cri.go:89] found id: ""
	I0127 13:38:53.416091  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.416103  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:38:53.416120  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:38:53.416185  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:38:53.449983  427154 cri.go:89] found id: ""
	I0127 13:38:53.450017  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.450029  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:38:53.450046  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:38:53.450064  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:38:53.498208  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:38:53.498242  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:38:53.552441  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:38:53.552472  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:38:53.567811  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:38:53.567841  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:38:53.646625  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:38:53.646651  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:38:53.646667  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
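With no control-plane containers found, the post-mortem falls back to host-level logs; the gathering steps above amount to this command set (taken from the Run: lines, with the crictl fallback trimmed):

	sudo crictl ps -a                                                        # container status
	sudo journalctl -u kubelet -n 400                                        # kubelet unit logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings
	sudo journalctl -u crio -n 400                                           # CRI-O unit logs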
	W0127 13:38:53.748675  427154 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 13:38:53.748747  427154 out.go:270] * 
	W0127 13:38:53.748849  427154 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.748865  427154 out.go:270] * 
	W0127 13:38:53.749670  427154 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:38:53.753264  427154 out.go:201] 
	W0127 13:38:53.754315  427154 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.754372  427154 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 13:38:53.754397  427154 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 13:38:53.755624  427154 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.767951918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737985134767932256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98820391-79be-4aa4-8aef-89cdbdf8d2ae name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.768595755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dab11176-2669-4539-835f-416f9e0eb94f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.768655935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dab11176-2669-4539-835f-416f9e0eb94f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.768686261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dab11176-2669-4539-835f-416f9e0eb94f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.801932868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16208829-bb89-432d-8fd6-6f8f084ebd43 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.802002168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16208829-bb89-432d-8fd6-6f8f084ebd43 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.803711902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8be9e7de-a6f1-4659-8507-7d3024a20d4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.804124166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737985134804101980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8be9e7de-a6f1-4659-8507-7d3024a20d4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.804623759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e91dd37d-3ef6-478d-a83b-f29cf66c22f4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.804677201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e91dd37d-3ef6-478d-a83b-f29cf66c22f4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.804706372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e91dd37d-3ef6-478d-a83b-f29cf66c22f4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.836161086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66860060-0176-48bc-8080-1eb8b70f177e name=/runtime.v1.RuntimeService/Version
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.836225098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66860060-0176-48bc-8080-1eb8b70f177e name=/runtime.v1.RuntimeService/Version
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.837145138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2dc261ad-2c9f-4ecb-836f-5b5f9019bbd9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.837482721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737985134837464652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2dc261ad-2c9f-4ecb-836f-5b5f9019bbd9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.837969137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98349f94-a41b-4d50-bf8d-7f14e1ab7767 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.838015526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98349f94-a41b-4d50-bf8d-7f14e1ab7767 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.838046753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=98349f94-a41b-4d50-bf8d-7f14e1ab7767 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.868610069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a0b2a89-417c-4715-ae11-769fd415d5ca name=/runtime.v1.RuntimeService/Version
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.868698178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a0b2a89-417c-4715-ae11-769fd415d5ca name=/runtime.v1.RuntimeService/Version
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.870223881Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=437a4c0d-dfa3-4971-afca-2166424c1c89 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.870552770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737985134870533482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=437a4c0d-dfa3-4971-afca-2166424c1c89 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.871433058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aad247d-53fa-4f36-91f2-be31fa698f71 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.871520954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aad247d-53fa-4f36-91f2-be31fa698f71 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:38:54 old-k8s-version-838260 crio[636]: time="2025-01-27 13:38:54.871570087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5aad247d-53fa-4f36-91f2-be31fa698f71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 13:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055193] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042878] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.106143] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.007175] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.630545] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653404] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.059408] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056860] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.174451] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.138641] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.249166] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +7.716430] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.059428] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.921451] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
	[Jan27 13:31] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 13:35] systemd-fstab-generator[5097]: Ignoring "noauto" option for root device
	[Jan27 13:36] systemd-fstab-generator[5380]: Ignoring "noauto" option for root device
	[  +0.055705] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:38:55 up 8 min,  0 users,  load average: 0.25, 0.16, 0.09
	Linux old-k8s-version-838260 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0000421c0)
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]: goroutine 125 [syscall]:
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]: syscall.Syscall6(0xe8, 0xc, 0xc000a9fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000a9fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc0000596c0, 0x0, 0x0, 0x0)
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0005c1720)
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jan 27 13:38:53 old-k8s-version-838260 kubelet[5562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jan 27 13:38:53 old-k8s-version-838260 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 13:38:53 old-k8s-version-838260 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 13:38:54 old-k8s-version-838260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 27 13:38:54 old-k8s-version-838260 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 13:38:54 old-k8s-version-838260 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 13:38:54 old-k8s-version-838260 kubelet[5628]: I0127 13:38:54.202867    5628 server.go:416] Version: v1.20.0
	Jan 27 13:38:54 old-k8s-version-838260 kubelet[5628]: I0127 13:38:54.203214    5628 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 13:38:54 old-k8s-version-838260 kubelet[5628]: I0127 13:38:54.205819    5628 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 13:38:54 old-k8s-version-838260 kubelet[5628]: W0127 13:38:54.207411    5628 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 13:38:54 old-k8s-version-838260 kubelet[5628]: I0127 13:38:54.207629    5628 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 2 (224.691038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-838260" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (514.40s)
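For reference, a minimal triage sketch for the failure above, built only from the commands that the kubeadm output and the minikube suggestion in the log already name (the profile name old-k8s-version-838260 and the out/minikube-linux-amd64 binary path come from the log itself; whether the suggested cgroup-driver setting actually clears the kubelet crash loop is an assumption to verify, not a confirmed fix):

# Inspect the kubelet unit and its most recent crash inside the node
out/minikube-linux-amd64 -p old-k8s-version-838260 ssh "sudo systemctl status kubelet"
out/minikube-linux-amd64 -p old-k8s-version-838260 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"

# List any control-plane containers cri-o managed to create
out/minikube-linux-amd64 -p old-k8s-version-838260 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

# Retry the start with the cgroup driver the suggestion in the log points at
out/minikube-linux-amd64 start -p old-k8s-version-838260 --extra-config=kubelet.cgroup-driver=systemd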

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
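For manual verification, the poll above is roughly equivalent to the following kubectl commands (assuming the kubectl context carries the profile name, which is minikube's default; the namespace, label selector and 9m timeout are taken from the line above). Until the apiserver at 192.168.61.159:8443 answers again, they fail with the same connection-refused error that the warnings below keep reporting:

# Show the dashboard pods the test is waiting for
kubectl --context old-k8s-version-838260 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

# Block until they report Ready, mirroring the test's 9m0s wait
kubectl --context old-k8s-version-838260 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m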
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:39:08.650516  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:39:19.966808  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:39:31.287241  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:39:39.547652  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:41:08.030706  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:41:24.791053  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:41:41.051400  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:41:52.492522  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:42:14.404409  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:42:39.248683  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:43:04.114269  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:43:30.267063  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:43:34.953167  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:43:37.468606  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:44:02.312958  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:44:19.966771  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:44:31.287109  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:44:39.548040  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:44:53.333554  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:44:58.019567  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:45:43.030018  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:45:54.350679  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:46:08.030685  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:46:24.791058  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:46:41.051567  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:47:14.403364  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:47:39.249180  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 2 (239.724843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-838260" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
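The failure above is the kubernetes-dashboard pod never reaching Ready because the apiserver stayed unreachable after the stop/start cycle (the repeated "connection refused" warnings to 192.168.61.159:8443, and the "Stopped" apiserver status below). A minimal sketch of reproducing the helper's poll by hand, assuming (as elsewhere in this report) that the kubectl context carries the profile name:

	kubectl --context old-k8s-version-838260 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-838260 -n kubernetes-dashboard describe pod -l k8s-app=kubernetes-dashboard

With the apiserver down, both commands fail with the same connection-refused error seen in the warnings.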
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 2 (220.184659ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-838260 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-563155                  | no-preload-563155            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-563155                                   | no-preload-563155            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-441438       | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-174381                 | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-174381                                  | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-838260        | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-838260             | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-441438                           | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-639843             | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-639843                  | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-639843 image list                           | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
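	# Illustrative reconstruction (not taken verbatim from the log): the "start -p old-k8s-version-838260"
	# audit entry above expands to roughly this invocation, pieced together from the table's Args column:
	out/minikube-linux-amd64 start -p old-k8s-version-838260 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0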
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:34:50
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:34:50.343590  429070 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:34:50.343706  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343717  429070 out.go:358] Setting ErrFile to fd 2...
	I0127 13:34:50.343725  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343905  429070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:34:50.344540  429070 out.go:352] Setting JSON to false
	I0127 13:34:50.345553  429070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":22630,"bootTime":1737962260,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:34:50.345705  429070 start.go:139] virtualization: kvm guest
	I0127 13:34:50.348432  429070 out.go:177] * [newest-cni-639843] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:34:50.349607  429070 notify.go:220] Checking for updates...
	I0127 13:34:50.349639  429070 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:34:50.350877  429070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:34:50.352137  429070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:50.353523  429070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:34:50.354936  429070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:34:50.356253  429070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:34:50.358120  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:50.358577  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.358648  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.375344  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 13:34:50.375770  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.376385  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.376429  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.376809  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.377061  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.377398  429070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:34:50.377833  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.377889  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.393490  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0127 13:34:50.393954  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.394574  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.394602  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.394931  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.395175  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.432045  429070 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:34:50.433260  429070 start.go:297] selected driver: kvm2
	I0127 13:34:50.433295  429070 start.go:901] validating driver "kvm2" against &{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.433450  429070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:34:50.434521  429070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.434662  429070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:34:50.455080  429070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:34:50.455695  429070 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:34:50.455755  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:34:50.455835  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:50.455908  429070 start.go:340] cluster config:
	{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.456092  429070 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.457706  429070 out.go:177] * Starting "newest-cni-639843" primary control-plane node in "newest-cni-639843" cluster
	I0127 13:34:50.458857  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:34:50.458907  429070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:34:50.458924  429070 cache.go:56] Caching tarball of preloaded images
	I0127 13:34:50.459033  429070 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:34:50.459049  429070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:34:50.459193  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:34:50.459403  429070 start.go:360] acquireMachinesLock for newest-cni-639843: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:34:50.459457  429070 start.go:364] duration metric: took 33.893µs to acquireMachinesLock for "newest-cni-639843"
	I0127 13:34:50.459478  429070 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:34:50.459488  429070 fix.go:54] fixHost starting: 
	I0127 13:34:50.459761  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.459807  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.475245  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0127 13:34:50.475743  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.476455  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.476504  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.476932  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.477227  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.477420  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:34:50.479725  429070 fix.go:112] recreateIfNeeded on newest-cni-639843: state=Stopped err=<nil>
	I0127 13:34:50.479768  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	W0127 13:34:50.479933  429070 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:34:50.481457  429070 out.go:177] * Restarting existing kvm2 VM for "newest-cni-639843" ...
	I0127 13:34:48.302747  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:48.321834  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:48.321899  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:48.370678  427154 cri.go:89] found id: ""
	I0127 13:34:48.370716  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.370732  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:48.370741  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:48.370813  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:48.430514  427154 cri.go:89] found id: ""
	I0127 13:34:48.430655  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.430683  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:48.430702  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:48.430826  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:48.477908  427154 cri.go:89] found id: ""
	I0127 13:34:48.477941  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.477954  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:48.477962  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:48.478036  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:48.532193  427154 cri.go:89] found id: ""
	I0127 13:34:48.532230  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.532242  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:48.532250  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:48.532316  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:48.580627  427154 cri.go:89] found id: ""
	I0127 13:34:48.580658  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.580667  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:48.580673  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:48.580744  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:48.620393  427154 cri.go:89] found id: ""
	I0127 13:34:48.620428  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.620441  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:48.620449  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:48.620518  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:48.662032  427154 cri.go:89] found id: ""
	I0127 13:34:48.662071  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.662079  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:48.662097  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:48.662164  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:48.699662  427154 cri.go:89] found id: ""
	I0127 13:34:48.699697  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.699709  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:48.699723  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:48.699745  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:48.752100  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:48.752134  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:48.768121  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:48.768167  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:48.838690  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:48.838718  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:48.838734  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:48.928433  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:48.928471  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
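	# A hand-run equivalent of the CRI listing loop above (sketch only; would be run inside the VM,
	# e.g. via "minikube ssh -p old-k8s-version-838260"). It prints the container IDs, if any, for
	# each control-plane component the log-gathering code checks:
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done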
	I0127 13:34:52.576263  426243 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:34:52.576356  426243 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:52.576423  426243 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:52.576582  426243 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:52.576704  426243 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:34:52.576783  426243 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:52.578299  426243 out.go:235]   - Generating certificates and keys ...
	I0127 13:34:52.578380  426243 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:34:52.578439  426243 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:34:52.578509  426243 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:34:52.578594  426243 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:34:52.578701  426243 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:34:52.578757  426243 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:34:52.578818  426243 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:34:52.578870  426243 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:34:52.578962  426243 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:34:52.579063  426243 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:34:52.579111  426243 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:34:52.579164  426243 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:34:52.579227  426243 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:34:52.579282  426243 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:34:52.579333  426243 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:34:52.579387  426243 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:34:52.579449  426243 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:34:52.579519  426243 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:34:52.579604  426243 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:34:52.581730  426243 out.go:235]   - Booting up control plane ...
	I0127 13:34:52.581854  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:34:52.581961  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:34:52.582058  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:34:52.582184  426243 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:34:52.582253  426243 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:34:52.582290  426243 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:34:52.582417  426243 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:34:52.582554  426243 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:34:52.582651  426243 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002999225s
	I0127 13:34:52.582795  426243 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:34:52.582903  426243 kubeadm.go:310] [api-check] The API server is healthy after 5.501149453s
	I0127 13:34:52.583076  426243 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:34:52.583258  426243 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:34:52.583323  426243 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:34:52.583591  426243 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-174381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:34:52.583679  426243 kubeadm.go:310] [bootstrap-token] Using token: 5hn0ox.etnk5twofkqgha4f
	I0127 13:34:52.584876  426243 out.go:235]   - Configuring RBAC rules ...
	I0127 13:34:52.585016  426243 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:34:52.585138  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:34:52.585329  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:34:52.585515  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:34:52.585645  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:34:52.585730  426243 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:34:52.585829  426243 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:34:52.585867  426243 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:34:52.585911  426243 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:34:52.585917  426243 kubeadm.go:310] 
	I0127 13:34:52.585967  426243 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:34:52.585973  426243 kubeadm.go:310] 
	I0127 13:34:52.586066  426243 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:34:52.586082  426243 kubeadm.go:310] 
	I0127 13:34:52.586138  426243 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:34:52.586214  426243 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:34:52.586295  426243 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:34:52.586319  426243 kubeadm.go:310] 
	I0127 13:34:52.586416  426243 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:34:52.586463  426243 kubeadm.go:310] 
	I0127 13:34:52.586522  426243 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:34:52.586532  426243 kubeadm.go:310] 
	I0127 13:34:52.586628  426243 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:34:52.586712  426243 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:34:52.586770  426243 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:34:52.586777  426243 kubeadm.go:310] 
	I0127 13:34:52.586857  426243 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:34:52.586926  426243 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:34:52.586932  426243 kubeadm.go:310] 
	I0127 13:34:52.587010  426243 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587095  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 \
	I0127 13:34:52.587119  426243 kubeadm.go:310] 	--control-plane 
	I0127 13:34:52.587125  426243 kubeadm.go:310] 
	I0127 13:34:52.587196  426243 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:34:52.587204  426243 kubeadm.go:310] 
	I0127 13:34:52.587272  426243 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587400  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 
	I0127 13:34:52.587418  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:34:52.587432  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:52.588976  426243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
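	# The bridge CNI configuration referenced here is written further down in this log to
	# /etc/cni/net.d/1-k8s.conflist (a 496-byte file whose exact payload is not shown). A minimal
	# bridge conflist of this general shape might look roughly like the following; this is an
	# illustrative sketch only, and the subnet value is assumed, not taken from the log:
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF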
	I0127 13:34:50.482735  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Start
	I0127 13:34:50.482923  429070 main.go:141] libmachine: (newest-cni-639843) starting domain...
	I0127 13:34:50.482942  429070 main.go:141] libmachine: (newest-cni-639843) ensuring networks are active...
	I0127 13:34:50.483967  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network default is active
	I0127 13:34:50.484412  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network mk-newest-cni-639843 is active
	I0127 13:34:50.484881  429070 main.go:141] libmachine: (newest-cni-639843) getting domain XML...
	I0127 13:34:50.485667  429070 main.go:141] libmachine: (newest-cni-639843) creating domain...
	I0127 13:34:51.790885  429070 main.go:141] libmachine: (newest-cni-639843) waiting for IP...
	I0127 13:34:51.792240  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:51.793056  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:51.793082  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:51.792897  429104 retry.go:31] will retry after 310.654811ms: waiting for domain to come up
	I0127 13:34:52.105667  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.106457  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.106639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.106581  429104 retry.go:31] will retry after 280.140783ms: waiting for domain to come up
	I0127 13:34:52.388057  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.388616  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.388639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.388575  429104 retry.go:31] will retry after 317.414736ms: waiting for domain to come up
	I0127 13:34:52.708208  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.708845  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.708880  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.708795  429104 retry.go:31] will retry after 475.980482ms: waiting for domain to come up
	I0127 13:34:53.186613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.187252  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.187320  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.187240  429104 retry.go:31] will retry after 619.306112ms: waiting for domain to come up
	I0127 13:34:53.807794  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.808436  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.808485  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.808365  429104 retry.go:31] will retry after 838.158661ms: waiting for domain to come up
	I0127 13:34:54.647849  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:54.648442  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:54.648465  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:54.648411  429104 retry.go:31] will retry after 739.028542ms: waiting for domain to come up
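	# The retry loop above is waiting for libvirt to hand the restarted domain an address on the
	# mk-newest-cni-639843 network. A hand check on the host would look roughly like this
	# (sketch, assuming the standard libvirt CLI tools are installed on the Jenkins agent):
	sudo virsh domifaddr newest-cni-639843
	sudo virsh net-dhcp-leases mk-newest-cni-639843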
	I0127 13:34:51.475609  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:51.489500  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:51.489579  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:51.536219  427154 cri.go:89] found id: ""
	I0127 13:34:51.536250  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.536262  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:51.536270  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:51.536334  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:51.577494  427154 cri.go:89] found id: ""
	I0127 13:34:51.577522  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.577536  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:51.577543  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:51.577606  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:51.614430  427154 cri.go:89] found id: ""
	I0127 13:34:51.614463  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.614476  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:51.614484  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:51.614602  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:51.666530  427154 cri.go:89] found id: ""
	I0127 13:34:51.666582  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.666591  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:51.666597  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:51.666653  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:51.705538  427154 cri.go:89] found id: ""
	I0127 13:34:51.705567  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.705579  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:51.705587  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:51.705645  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:51.743604  427154 cri.go:89] found id: ""
	I0127 13:34:51.743638  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.743650  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:51.743658  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:51.743721  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:51.778029  427154 cri.go:89] found id: ""
	I0127 13:34:51.778058  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.778070  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:51.778078  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:51.778148  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:51.819260  427154 cri.go:89] found id: ""
	I0127 13:34:51.819294  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.819307  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:51.819321  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:51.819338  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:51.887511  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:51.887552  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:51.904227  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:51.904261  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:51.980655  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:51.980684  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:51.980699  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:52.085922  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:52.085973  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:54.642029  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:54.655922  427154 kubeadm.go:597] duration metric: took 4m4.240008337s to restartPrimaryControlPlane
	W0127 13:34:54.656192  427154 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:34:54.656244  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
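	# Because the existing v1.20.0 control plane could not be restarted, minikube falls back to the
	# "kubeadm reset --force" shown above before re-creating the cluster. A quick hand check inside
	# the VM that the reset actually cleared the static pod manifests and CRI containers (sketch):
	ls /etc/kubernetes/manifests
	sudo crictl ps -a --quiet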
	I0127 13:34:52.590276  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:34:52.604204  426243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:34:52.631515  426243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:34:52.631609  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:52.631702  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-174381 minikube.k8s.io/updated_at=2025_01_27T13_34_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-174381 minikube.k8s.io/primary=true
	I0127 13:34:52.663541  426243 ops.go:34] apiserver oom_adj: -16
	I0127 13:34:52.870691  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.371756  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.871386  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.371644  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.871179  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.370747  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.871458  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.371676  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.870824  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.982232  426243 kubeadm.go:1113] duration metric: took 4.350694221s to wait for elevateKubeSystemPrivileges
	I0127 13:34:56.982281  426243 kubeadm.go:394] duration metric: took 6m1.699030467s to StartCluster
	I0127 13:34:56.982314  426243 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.982426  426243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:56.983746  426243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.984032  426243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:34:56.984111  426243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:34:56.984230  426243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-174381"
	I0127 13:34:56.984249  426243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-174381"
	W0127 13:34:56.984258  426243 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:34:56.984273  426243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-174381"
	I0127 13:34:56.984292  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984300  426243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-174381"
	I0127 13:34:56.984303  426243 config.go:182] Loaded profile config "embed-certs-174381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:56.984359  426243 addons.go:69] Setting dashboard=true in profile "embed-certs-174381"
	I0127 13:34:56.984372  426243 addons.go:238] Setting addon dashboard=true in "embed-certs-174381"
	W0127 13:34:56.984381  426243 addons.go:247] addon dashboard should already be in state true
	I0127 13:34:56.984405  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984450  426243 addons.go:69] Setting metrics-server=true in profile "embed-certs-174381"
	I0127 13:34:56.984484  426243 addons.go:238] Setting addon metrics-server=true in "embed-certs-174381"
	W0127 13:34:56.984494  426243 addons.go:247] addon metrics-server should already be in state true
	I0127 13:34:56.984524  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984760  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984778  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984799  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984801  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984812  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984826  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984943  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984977  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.986354  426243 out.go:177] * Verifying Kubernetes components...
	I0127 13:34:56.988314  426243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:34:57.003008  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I0127 13:34:57.003716  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.003737  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0127 13:34:57.004011  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0127 13:34:57.004163  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004169  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0127 13:34:57.004457  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004482  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.004559  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004638  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004651  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004670  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005012  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005085  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005111  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005198  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.005324  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005340  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005955  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005969  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005970  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.006577  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006617  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.006912  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006964  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.007601  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.007633  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.009217  426243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-174381"
	W0127 13:34:57.009239  426243 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:34:57.009268  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:57.009605  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.009648  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.027242  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0127 13:34:57.027495  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0127 13:34:57.027644  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.027844  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.028181  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028198  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028301  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028318  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028539  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028633  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.028808  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.029068  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0127 13:34:57.029543  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.030162  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.030190  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.030581  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.030601  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031166  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.031207  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.031430  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031637  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I0127 13:34:57.031993  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.032625  426243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:34:57.032750  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.032765  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.033302  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.033477  426243 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:34:57.033498  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.033587  426243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.033607  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:34:57.033627  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.035541  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.035761  426243 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036794  426243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036804  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:34:57.036814  426243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:34:57.036833  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.037349  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.037808  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:34:57.037827  426243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:34:57.037856  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.038015  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.038042  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.038208  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.038375  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.038561  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.038701  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.041035  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041500  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.041519  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041915  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042008  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042189  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042254  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.042272  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042592  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042583  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.042727  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042852  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.055810  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0127 13:34:57.056237  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.056772  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.056801  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.057165  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.057501  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.059165  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.059398  426243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.059418  426243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:34:57.059437  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.062703  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063236  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.063266  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063369  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.063544  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.063694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.063831  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.242347  426243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:34:57.326178  426243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352801  426243 node_ready.go:49] node "embed-certs-174381" has status "Ready":"True"
	I0127 13:34:57.352828  426243 node_ready.go:38] duration metric: took 26.613856ms for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352841  426243 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:34:57.368293  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:57.372941  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:34:57.372962  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:34:57.391676  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.418587  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:34:57.418616  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:34:57.446588  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.460844  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:34:57.460869  426243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:34:57.507947  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:34:57.507976  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:34:57.542669  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.542701  426243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:34:57.630641  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:34:57.630672  426243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:34:57.639506  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.693463  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:34:57.693498  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:34:57.806045  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:34:57.806082  426243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:34:57.930058  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:34:57.930101  426243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:34:58.055263  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:34:58.055295  426243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:34:58.110576  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:34:58.110609  426243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:34:58.202270  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.202305  426243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:34:58.293311  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.514356  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067720868s)
	I0127 13:34:58.514435  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514450  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.514846  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.514876  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.514894  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.514909  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514920  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.515161  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.515197  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.515860  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516243  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124532885s)
	I0127 13:34:58.516270  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516281  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.516739  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.516757  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.516768  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516776  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516787  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.517207  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.517230  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549206  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.549228  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.549614  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.549638  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549648  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260116  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.620545789s)
	I0127 13:34:59.260244  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260271  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.260620  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260713  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.260730  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.260746  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260761  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.261011  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.261041  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.261061  426243 addons.go:479] Verifying addon metrics-server=true in "embed-certs-174381"
	I0127 13:34:59.395546  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.395572  426243 pod_ready.go:82] duration metric: took 2.027244475s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.395586  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407673  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.407695  426243 pod_ready.go:82] duration metric: took 12.102291ms for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407705  426243 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417168  426243 pod_ready.go:93] pod "etcd-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.417190  426243 pod_ready.go:82] duration metric: took 9.47928ms for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417199  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:00.168433  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.875044372s)
	I0127 13:35:00.168496  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.168520  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.168866  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.170590  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.170645  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.170666  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.170673  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.171042  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.171132  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.171105  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.172686  426243 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-174381 addons enable metrics-server
	
	I0127 13:35:00.174376  426243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 13:34:59.517968  427154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.861694115s)
	I0127 13:34:59.518062  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:34:59.536180  427154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:34:59.547986  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:34:59.561566  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:34:59.561591  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:34:59.561645  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:34:59.574802  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:34:59.574872  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:34:59.588185  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:34:59.598292  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:34:59.598356  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:34:59.608921  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.621764  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:34:59.621825  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.635526  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:34:59.646582  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:34:59.646644  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:34:59.657975  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:34:59.745239  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:34:59.745337  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:59.946676  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:59.946890  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:59.947050  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:35:00.183580  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:55.388471  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:55.388933  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:55.388964  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:55.388914  429104 retry.go:31] will retry after 1.346738272s: waiting for domain to come up
	I0127 13:34:56.737433  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:56.738024  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:56.738081  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:56.738007  429104 retry.go:31] will retry after 1.120347472s: waiting for domain to come up
	I0127 13:34:57.860265  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:57.860912  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:57.860943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:57.860882  429104 retry.go:31] will retry after 2.152534572s: waiting for domain to come up
	I0127 13:35:00.015953  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:00.016579  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:00.016613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:00.016544  429104 retry.go:31] will retry after 2.588698804s: waiting for domain to come up
	I0127 13:35:00.184950  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:35:00.185049  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:35:00.185140  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:35:00.185334  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:35:00.185435  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:35:00.186094  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:35:00.186301  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:35:00.187022  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:35:00.187455  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:35:00.187928  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:35:00.188334  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:35:00.188531  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:35:00.188608  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:35:00.344156  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:35:00.836083  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:35:00.964664  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:35:01.072929  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:35:01.092946  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:35:01.097538  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:35:01.097961  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:35:01.292953  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:35:00.175566  426243 addons.go:514] duration metric: took 3.191465201s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:35:01.424773  426243 pod_ready.go:103] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"False"
	I0127 13:35:01.924012  426243 pod_ready.go:93] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:01.924044  426243 pod_ready.go:82] duration metric: took 2.506836977s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:01.924057  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.607848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:02.608639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:02.608669  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:02.608620  429104 retry.go:31] will retry after 2.763044938s: waiting for domain to come up
	I0127 13:35:01.294375  427154 out.go:235]   - Booting up control plane ...
	I0127 13:35:01.294569  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:35:01.306014  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:35:01.309847  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:35:01.310062  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:35:01.312436  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:35:02.931062  426243 pod_ready.go:93] pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.931095  426243 pod_ready.go:82] duration metric: took 1.007026875s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.931108  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936917  426243 pod_ready.go:93] pod "kube-proxy-cjsf9" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.936945  426243 pod_ready.go:82] duration metric: took 5.828276ms for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936957  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444155  426243 pod_ready.go:93] pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:03.444192  426243 pod_ready.go:82] duration metric: took 507.225554ms for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444203  426243 pod_ready.go:39] duration metric: took 6.091349359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:35:03.444226  426243 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:03.444294  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:03.488162  426243 api_server.go:72] duration metric: took 6.504085901s to wait for apiserver process to appear ...
	I0127 13:35:03.488197  426243 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:03.488224  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:35:03.493586  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0127 13:35:03.494867  426243 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:03.494894  426243 api_server.go:131] duration metric: took 6.689991ms to wait for apiserver health ...
	I0127 13:35:03.494903  426243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:03.575835  426243 system_pods.go:59] 9 kube-system pods found
	I0127 13:35:03.575871  426243 system_pods.go:61] "coredns-668d6bf9bc-9ncnm" [8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7] Running
	I0127 13:35:03.575877  426243 system_pods.go:61] "coredns-668d6bf9bc-hjncm" [68641e50-9f99-4811-9752-c7dc0db47502] Running
	I0127 13:35:03.575881  426243 system_pods.go:61] "etcd-embed-certs-174381" [fc5cb0ba-724d-4b3d-a6d0-65644ed57d99] Running
	I0127 13:35:03.575886  426243 system_pods.go:61] "kube-apiserver-embed-certs-174381" [7afdc2d3-86bd-480d-a081-e1475ff21346] Running
	I0127 13:35:03.575890  426243 system_pods.go:61] "kube-controller-manager-embed-certs-174381" [fa410171-2b30-4c79-97d4-87c1549fd75c] Running
	I0127 13:35:03.575894  426243 system_pods.go:61] "kube-proxy-cjsf9" [c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6] Running
	I0127 13:35:03.575901  426243 system_pods.go:61] "kube-scheduler-embed-certs-174381" [ab92b381-fb78-4aa1-bc55-4e47a58f2c32] Running
	I0127 13:35:03.575908  426243 system_pods.go:61] "metrics-server-f79f97bbb-hxlwf" [cb779c78-85f9-48e7-88c3-f087f57547e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:03.575913  426243 system_pods.go:61] "storage-provisioner" [3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1] Running
	I0127 13:35:03.575922  426243 system_pods.go:74] duration metric: took 81.012821ms to wait for pod list to return data ...
	I0127 13:35:03.575931  426243 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:03.772597  426243 default_sa.go:45] found service account: "default"
	I0127 13:35:03.772641  426243 default_sa.go:55] duration metric: took 196.700969ms for default service account to be created ...
	I0127 13:35:03.772655  426243 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:35:03.976966  426243 system_pods.go:87] 9 kube-system pods found
	I0127 13:35:05.375624  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:05.376167  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:05.376199  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:05.376124  429104 retry.go:31] will retry after 2.824398155s: waiting for domain to come up
	I0127 13:35:08.203385  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:08.203848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:08.203881  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:08.203823  429104 retry.go:31] will retry after 4.529537578s: waiting for domain to come up
	I0127 13:35:12.735786  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736343  429070 main.go:141] libmachine: (newest-cni-639843) found domain IP: 192.168.50.22
	I0127 13:35:12.736364  429070 main.go:141] libmachine: (newest-cni-639843) reserving static IP address...
	I0127 13:35:12.736378  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has current primary IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736707  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.736748  429070 main.go:141] libmachine: (newest-cni-639843) reserved static IP address 192.168.50.22 for domain newest-cni-639843
	I0127 13:35:12.736770  429070 main.go:141] libmachine: (newest-cni-639843) DBG | skip adding static IP to network mk-newest-cni-639843 - found existing host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"}
	I0127 13:35:12.736785  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Getting to WaitForSSH function...
	I0127 13:35:12.736810  429070 main.go:141] libmachine: (newest-cni-639843) waiting for SSH...
	I0127 13:35:12.739230  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739563  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.739592  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739721  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH client type: external
	I0127 13:35:12.739746  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa (-rw-------)
	I0127 13:35:12.739781  429070 main.go:141] libmachine: (newest-cni-639843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:35:12.739791  429070 main.go:141] libmachine: (newest-cni-639843) DBG | About to run SSH command:
	I0127 13:35:12.739800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | exit 0
	I0127 13:35:12.866664  429070 main.go:141] libmachine: (newest-cni-639843) DBG | SSH cmd err, output: <nil>: 
	I0127 13:35:12.867059  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetConfigRaw
	I0127 13:35:12.867776  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:12.870461  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.870943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.870979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.871221  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:35:12.871401  429070 machine.go:93] provisionDockerMachine start ...
	I0127 13:35:12.871421  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:12.871618  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.873979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874373  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.874411  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874581  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.874746  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.874903  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.875063  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.875221  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.875426  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.875440  429070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:35:12.979102  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:35:12.979140  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979406  429070 buildroot.go:166] provisioning hostname "newest-cni-639843"
	I0127 13:35:12.979435  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.982631  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983000  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.983025  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983170  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.983324  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983447  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983605  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.983809  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.984033  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.984051  429070 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-639843 && echo "newest-cni-639843" | sudo tee /etc/hostname
	I0127 13:35:13.107964  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-639843
	
	I0127 13:35:13.108004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.111168  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111589  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.111617  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111790  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.111995  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112158  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112289  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.112481  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.112709  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.112733  429070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-639843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-639843/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-639843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:35:13.226643  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:35:13.226683  429070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:35:13.226734  429070 buildroot.go:174] setting up certificates
	I0127 13:35:13.226749  429070 provision.go:84] configureAuth start
	I0127 13:35:13.226767  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:13.227060  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:13.230284  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230719  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.230752  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.233444  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233798  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.233832  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233972  429070 provision.go:143] copyHostCerts
	I0127 13:35:13.234039  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:35:13.234053  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:35:13.234146  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:35:13.234301  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:35:13.234313  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:35:13.234354  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:35:13.234450  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:35:13.234462  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:35:13.234497  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:35:13.234598  429070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.newest-cni-639843 san=[127.0.0.1 192.168.50.22 localhost minikube newest-cni-639843]
	I0127 13:35:13.505038  429070 provision.go:177] copyRemoteCerts
	I0127 13:35:13.505119  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:35:13.505154  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.508162  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508530  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.508555  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508759  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.508944  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.509117  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.509267  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:13.595888  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:35:13.621151  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:35:13.647473  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:35:13.673605  429070 provision.go:87] duration metric: took 446.83901ms to configureAuth
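
The server certificate generated above carries the SAN list logged at provision time (127.0.0.1, 192.168.50.22, localhost, minikube, newest-cni-639843). A hedged sketch of producing such a certificate with Go's crypto/x509, self-signed for brevity where the real flow signs with the minikube CA key referenced in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SAN list and org name taken from the provisioning log above
        sans := []string{"127.0.0.1", "192.168.50.22", "localhost", "minikube", "newest-cni-639843"}

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-639843"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // split SANs into IP and DNS entries, as a provisioner typically does
        for _, s := range sans {
            if ip := net.ParseIP(s); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, s)
            }
        }
        // self-signed here; the real provisioner signs with the minikube CA
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
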
	I0127 13:35:13.673655  429070 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:35:13.673889  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:13.674004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.676982  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677392  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.677421  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677573  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.677762  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.677972  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.678123  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.678273  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.678496  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.678527  429070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:35:13.921465  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:35:13.921494  429070 machine.go:96] duration metric: took 1.050079095s to provisionDockerMachine
	I0127 13:35:13.921510  429070 start.go:293] postStartSetup for "newest-cni-639843" (driver="kvm2")
	I0127 13:35:13.921522  429070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:35:13.921543  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:13.921954  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:35:13.922025  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.925574  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.925941  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.926012  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.926266  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.926493  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.926675  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.926888  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.014753  429070 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:35:14.019344  429070 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:35:14.019374  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:35:14.019439  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:35:14.019540  429070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:35:14.019659  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:35:14.031277  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:14.060121  429070 start.go:296] duration metric: took 138.59357ms for postStartSetup
	I0127 13:35:14.060165  429070 fix.go:56] duration metric: took 23.600678344s for fixHost
	I0127 13:35:14.060188  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.063145  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063514  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.063542  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063761  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.063980  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064176  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064340  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.064541  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:14.064724  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:14.064738  429070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:35:14.172785  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984914.150810987
	
	I0127 13:35:14.172823  429070 fix.go:216] guest clock: 1737984914.150810987
	I0127 13:35:14.172832  429070 fix.go:229] Guest: 2025-01-27 13:35:14.150810987 +0000 UTC Remote: 2025-01-27 13:35:14.060169498 +0000 UTC m=+23.763612053 (delta=90.641489ms)
	I0127 13:35:14.172889  429070 fix.go:200] guest clock delta is within tolerance: 90.641489ms
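
The clock check above runs "date +%s.%N" in the guest, parses the result, and compares it to the host-side timestamp; the 90.641489ms delta is accepted because it is under the tolerance. A small Go sketch of that parse-and-compare step, with the one-second tolerance being an assumed value rather than minikube's actual constant:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock parses `date +%s.%N` output (seconds.nanoseconds) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1737984914.150810987") // guest output from the log
        if err != nil {
            panic(err)
        }
        host := time.Date(2025, 1, 27, 13, 35, 14, 60169498, time.UTC) // host-side timestamp from the log
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance, not minikube's actual constant
        fmt.Printf("guest clock delta %v within tolerance %v: %v\n", delta, tolerance, delta < tolerance)
    }
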
	I0127 13:35:14.172905  429070 start.go:83] releasing machines lock for "newest-cni-639843", held for 23.713435883s
	I0127 13:35:14.172938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.173202  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:14.176163  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176559  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.176600  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177182  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177351  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177450  429070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:35:14.177498  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.177596  429070 ssh_runner.go:195] Run: cat /version.json
	I0127 13:35:14.177625  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.180456  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180561  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180838  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180910  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.180914  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180944  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.181150  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.181189  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181344  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181357  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181546  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181536  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.181739  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.283980  429070 ssh_runner.go:195] Run: systemctl --version
	I0127 13:35:14.290329  429070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:35:14.450608  429070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:35:14.461512  429070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:35:14.461597  429070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:35:14.482924  429070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:35:14.482951  429070 start.go:495] detecting cgroup driver to use...
	I0127 13:35:14.483022  429070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:35:14.503452  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:35:14.517592  429070 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:35:14.517659  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:35:14.532792  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:35:14.547306  429070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:35:14.671116  429070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:35:14.818034  429070 docker.go:233] disabling docker service ...
	I0127 13:35:14.818133  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:35:14.832550  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:35:14.845137  429070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:35:14.986833  429070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:35:15.122943  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:35:15.137706  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:35:15.157591  429070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:35:15.157669  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.168185  429070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:35:15.168268  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.178876  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.188792  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.198951  429070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:35:15.209169  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.219549  429070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.238633  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
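
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. As a representative of that pattern, a Go sketch of the pause_image substitution done with a regexp instead of sed (hypothetical helper, not part of minikube):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setPauseImage rewrites the pause_image line in a cri-o drop-in,
    // mirroring the sed substitution logged above.
    func setPauseImage(conf, image string) string {
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", image))
    }

    func main() {
        in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
        fmt.Print(setPauseImage(in, "registry.k8s.io/pause:3.10"))
    }
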
	I0127 13:35:15.249729  429070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:35:15.259178  429070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:35:15.259244  429070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:35:15.272097  429070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:35:15.281611  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:15.403472  429070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0127 13:35:15.498842  429070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:35:15.498928  429070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:35:15.505405  429070 start.go:563] Will wait 60s for crictl version
	I0127 13:35:15.505478  429070 ssh_runner.go:195] Run: which crictl
	I0127 13:35:15.509869  429070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:35:15.580026  429070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:35:15.580122  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.609376  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.643173  429070 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:35:15.644483  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:15.647483  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.647905  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:15.647930  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.648148  429070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 13:35:15.652911  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:15.668696  429070 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 13:35:15.670127  429070 kubeadm.go:883] updating cluster {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:35:15.670264  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:35:15.670328  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:15.716362  429070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 13:35:15.716455  429070 ssh_runner.go:195] Run: which lz4
	I0127 13:35:15.721254  429070 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:35:15.727443  429070 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:35:15.727478  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 13:35:17.208454  429070 crio.go:462] duration metric: took 1.487249966s to copy over tarball
	I0127 13:35:17.208542  429070 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:35:19.421239  429070 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.212662568s)
	I0127 13:35:19.421271  429070 crio.go:469] duration metric: took 2.21278342s to extract the tarball
	I0127 13:35:19.421281  429070 ssh_runner.go:146] rm: /preloaded.tar.lz4
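
The preload step above copies preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 into the guest and unpacks it with tar's lz4 filter before deleting the tarball. A sketch of the same extraction command driven from Go; here it runs locally, whereas the log shows it executed over SSH inside the VM:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // same flags and paths as the logged command; requires tar and lz4 on the target
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Println("preloaded images extracted under /var")
    }
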
	I0127 13:35:19.461756  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:19.504974  429070 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:35:19.505005  429070 cache_images.go:84] Images are preloaded, skipping loading
	I0127 13:35:19.505015  429070 kubeadm.go:934] updating node { 192.168.50.22 8443 v1.32.1 crio true true} ...
	I0127 13:35:19.505173  429070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-639843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 13:35:19.505269  429070 ssh_runner.go:195] Run: crio config
	I0127 13:35:19.556732  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:19.556754  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:19.556766  429070 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 13:35:19.556791  429070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.22 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-639843 NodeName:newest-cni-639843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:35:19.556951  429070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-639843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 13:35:19.557032  429070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:35:19.567405  429070 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:35:19.567483  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:35:19.577572  429070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 13:35:19.595555  429070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:35:19.612336  429070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0127 13:35:19.630199  429070 ssh_runner.go:195] Run: grep 192.168.50.22	control-plane.minikube.internal$ /etc/hosts
	I0127 13:35:19.634268  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:19.646912  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:19.764087  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:19.783083  429070 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843 for IP: 192.168.50.22
	I0127 13:35:19.783115  429070 certs.go:194] generating shared ca certs ...
	I0127 13:35:19.783139  429070 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:19.783330  429070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:35:19.783386  429070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:35:19.783400  429070 certs.go:256] generating profile certs ...
	I0127 13:35:19.783534  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/client.key
	I0127 13:35:19.783619  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key.505bfb94
	I0127 13:35:19.783671  429070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key
	I0127 13:35:19.783826  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:35:19.783866  429070 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:35:19.783880  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:35:19.783913  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:35:19.783939  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:35:19.783961  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:35:19.784010  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:19.784667  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:35:19.821550  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:35:19.860184  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:35:19.893311  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:35:19.926181  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:35:19.954565  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:35:19.997938  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:35:20.022058  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:35:20.045748  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:35:20.069279  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:35:20.092959  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:35:20.117180  429070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:35:20.135202  429070 ssh_runner.go:195] Run: openssl version
	I0127 13:35:20.141197  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:35:20.152160  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156810  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156871  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.162645  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:35:20.174920  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:35:20.187426  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192129  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192174  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.198019  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:35:20.210195  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:35:20.220934  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225588  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225622  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.231516  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:35:20.243779  429070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:35:20.248511  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:35:20.254523  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:35:20.260441  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:35:20.266429  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:35:20.272290  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:35:20.278051  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
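
Each certificate above is validated with openssl x509 -checkend 86400, i.e. "does this certificate expire within the next 24 hours". An equivalent check written against Go's crypto/x509 (hypothetical helper; it only inspects the first PEM block in the file):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path
    // expires within d, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // path taken from the log; the real check runs inside the guest VM
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
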
	I0127 13:35:20.284024  429070 kubeadm.go:392] StartCluster: {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:35:20.284105  429070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:35:20.284164  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.332523  429070 cri.go:89] found id: ""
	I0127 13:35:20.332587  429070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:35:20.344932  429070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:35:20.344959  429070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:35:20.345011  429070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:35:20.355729  429070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:35:20.356795  429070 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-639843" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:20.357505  429070 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-639843" cluster setting kubeconfig missing "newest-cni-639843" context setting]
	I0127 13:35:20.358374  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:20.360037  429070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:35:20.371572  429070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.22
	I0127 13:35:20.371606  429070 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:35:20.371622  429070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:35:20.371679  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.418797  429070 cri.go:89] found id: ""
	I0127 13:35:20.418873  429070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:35:20.437304  429070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:35:20.447636  429070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:35:20.447660  429070 kubeadm.go:157] found existing configuration files:
	
	I0127 13:35:20.447704  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:35:20.458280  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:35:20.458335  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:35:20.469304  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:35:20.478639  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:35:20.478689  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:35:20.488624  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.497867  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:35:20.497908  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.507379  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:35:20.516362  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:35:20.516416  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:35:20.525787  429070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:35:20.542646  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:20.671597  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.498726  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.899789  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.965210  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:22.062165  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:22.062252  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:22.563318  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.063066  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.082649  429070 api_server.go:72] duration metric: took 1.020482627s to wait for apiserver process to appear ...
	I0127 13:35:23.082686  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:23.082711  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:23.083244  429070 api_server.go:269] stopped: https://192.168.50.22:8443/healthz: Get "https://192.168.50.22:8443/healthz": dial tcp 192.168.50.22:8443: connect: connection refused
	I0127 13:35:23.583699  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.503776  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:35:25.503807  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:35:25.503825  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.547403  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.547434  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:25.583659  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.589328  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.589357  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.082833  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.087881  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.087908  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.583159  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.592115  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.592148  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:27.083703  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.090407  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.098905  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.098928  429070 api_server.go:131] duration metric: took 4.01623437s to wait for apiserver health ...
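For reference, the verbose per-check output logged above is what the apiserver's health endpoint returns when asked for detail; once the kubeconfig written by this run is in place it can be reproduced by hand (sketch; the context name is assumed to match the profile):

	# Same [+]/[-] breakdown as in the log; anonymous requests get 403, so go
	# through the authenticated kubectl client rather than plain curl.
	kubectl --kubeconfig /home/jenkins/minikube-integration/20317-361578/kubeconfig \
	  --context newest-cni-639843 get --raw '/healthz?verbose'
	# Poll until the endpoint settles on a plain "ok".
	until kubectl --context newest-cni-639843 get --raw '/healthz' >/dev/null 2>&1; do sleep 1; done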
	I0127 13:35:27.098938  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:27.098944  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:27.100651  429070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:35:27.101855  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:35:27.116286  429070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
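The 496-byte conflist copied above is minikube's bridge CNI configuration; the log does not reproduce its contents, but a bridge + host-local config of the same general shape looks roughly like this (illustrative only; every field value here is an assumption, not the file minikube actually ships):

	# Illustrative sketch of a /etc/cni/net.d/1-k8s.conflist-style config.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF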
	I0127 13:35:27.139348  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.158680  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.158717  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.158730  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.158741  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.158748  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.158757  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:35:27.158766  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.158776  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.158785  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:35:27.158819  429070 system_pods.go:74] duration metric: took 19.446392ms to wait for pod list to return data ...
	I0127 13:35:27.158832  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.168338  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.168376  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.168392  429070 node_conditions.go:105] duration metric: took 9.550643ms to run NodePressure ...
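The same capacity figures the NodePressure check reads can be spot-checked from outside the test (assuming the context and node name created by this run):

	# Node capacity as reported by the API (cpu, memory, ephemeral-storage).
	kubectl --context newest-cni-639843 get node newest-cni-639843 \
	  -o jsonpath='{.status.capacity}{"\n"}'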
	I0127 13:35:27.168416  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:27.459759  429070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:35:27.473184  429070 ops.go:34] apiserver oom_adj: -16
	I0127 13:35:27.473212  429070 kubeadm.go:597] duration metric: took 7.128244476s to restartPrimaryControlPlane
	I0127 13:35:27.473226  429070 kubeadm.go:394] duration metric: took 7.18920723s to StartCluster
	I0127 13:35:27.473251  429070 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.473341  429070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:27.475111  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.475373  429070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:35:27.475451  429070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:35:27.475562  429070 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-639843"
	I0127 13:35:27.475584  429070 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-639843"
	W0127 13:35:27.475598  429070 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:35:27.475598  429070 addons.go:69] Setting dashboard=true in profile "newest-cni-639843"
	I0127 13:35:27.475600  429070 addons.go:69] Setting metrics-server=true in profile "newest-cni-639843"
	I0127 13:35:27.475621  429070 addons.go:238] Setting addon dashboard=true in "newest-cni-639843"
	I0127 13:35:27.475629  429070 addons.go:238] Setting addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:27.475639  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	W0127 13:35:27.475643  429070 addons.go:247] addon metrics-server should already be in state true
	I0127 13:35:27.475676  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.475582  429070 addons.go:69] Setting default-storageclass=true in profile "newest-cni-639843"
	I0127 13:35:27.475611  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:27.475708  429070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-639843"
	W0127 13:35:27.475630  429070 addons.go:247] addon dashboard should already be in state true
	I0127 13:35:27.475812  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.476070  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476077  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476115  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476134  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476159  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476168  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476195  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476204  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.477011  429070 out.go:177] * Verifying Kubernetes components...
	I0127 13:35:27.478509  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:27.493703  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0127 13:35:27.493801  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I0127 13:35:27.493955  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0127 13:35:27.494221  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494259  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494795  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494819  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.494840  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494932  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494956  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495188  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495296  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495464  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.495481  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495764  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495798  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495812  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495819  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495871  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.496119  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0127 13:35:27.496433  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.496529  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.496572  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.496893  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.496916  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.497264  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.497502  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.502029  429070 addons.go:238] Setting addon default-storageclass=true in "newest-cni-639843"
	W0127 13:35:27.502051  429070 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:35:27.502080  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.502830  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.502873  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.512816  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I0127 13:35:27.513096  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0127 13:35:27.513275  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I0127 13:35:27.535151  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535226  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535266  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535748  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535766  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.535769  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535791  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.536087  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536347  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536392  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.536559  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.537321  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.537343  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.537676  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.537946  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.538406  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539127  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539700  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.540468  429070 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:35:27.540479  429070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:35:27.541259  429070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:35:27.542133  429070 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.542154  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:35:27.542174  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.542782  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:35:27.542801  429070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:35:27.542820  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.543610  429070 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:35:27.544743  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:35:27.544762  429070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:35:27.544780  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.545935  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546330  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.546364  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546495  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.546708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.546872  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.547017  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.547822  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548084  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548244  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548291  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548448  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548585  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548619  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548786  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548800  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.548938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548980  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.549036  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.549180  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.554799  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0127 13:35:27.555253  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.555780  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.555800  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.556187  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.556616  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.556646  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.574277  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0127 13:35:27.574815  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.575396  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.575420  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.575741  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.575966  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.577346  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.577556  429070 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.577574  429070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:35:27.577594  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.580061  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580408  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.580432  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580659  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.580836  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.580987  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.581148  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
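The ssh clients opened above use the per-profile key minikube generated for this VM; the same session can be opened by hand with either of the following (host, key path and user taken from the sshutil lines above):

	# Via minikube's wrapper ...
	minikube -p newest-cni-639843 ssh
	# ... or directly against the VM.
	ssh -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa docker@192.168.50.22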
	I0127 13:35:27.713210  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:27.737971  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:27.738049  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:27.755609  429070 api_server.go:72] duration metric: took 280.198045ms to wait for apiserver process to appear ...
	I0127 13:35:27.755639  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:27.755660  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.765216  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.767614  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.767639  429070 api_server.go:131] duration metric: took 11.991322ms to wait for apiserver health ...
	I0127 13:35:27.767650  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.781696  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.781778  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.781799  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.781815  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.781827  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.781836  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running
	I0127 13:35:27.781862  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.781874  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.781884  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running
	I0127 13:35:27.781895  429070 system_pods.go:74] duration metric: took 14.236485ms to wait for pod list to return data ...
	I0127 13:35:27.781908  429070 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:27.787854  429070 default_sa.go:45] found service account: "default"
	I0127 13:35:27.787884  429070 default_sa.go:55] duration metric: took 5.965578ms for default service account to be created ...
	I0127 13:35:27.787899  429070 kubeadm.go:582] duration metric: took 312.493014ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:35:27.787924  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.793927  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.793949  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.793961  429070 node_conditions.go:105] duration metric: took 6.028431ms to run NodePressure ...
	I0127 13:35:27.793975  429070 start.go:241] waiting for startup goroutines ...
	I0127 13:35:27.806081  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.851437  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.912936  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:35:27.912967  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:35:27.941546  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:35:27.941579  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:35:28.017628  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:35:28.017663  429070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:35:28.027973  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:35:28.028016  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:35:28.097111  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.097146  429070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:35:28.148404  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:35:28.148439  429070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:35:28.272234  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.273446  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:35:28.273473  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:35:28.324863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:35:28.324897  429070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:35:28.400474  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:35:28.400504  429070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:35:28.460550  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:35:28.460583  429070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:35:28.508999  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:35:28.509031  429070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:35:28.555538  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555570  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.555889  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.555906  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.555915  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555923  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.556151  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.556180  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.556196  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.564252  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.564277  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.564553  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.564574  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.564893  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.605863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:28.605896  429070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:35:28.650259  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:29.517093  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.66560932s)
	I0127 13:35:29.517160  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517173  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517607  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:29.517645  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517655  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.517664  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517672  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517974  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517996  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741184  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46890411s)
	I0127 13:35:29.741241  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741252  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.741558  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.741576  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741586  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741609  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.742656  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.742680  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.742692  429070 addons.go:479] Verifying addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:29.742659  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:30.069134  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.418812542s)
	I0127 13:35:30.069214  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069233  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069539  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069559  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.069568  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069575  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069840  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069856  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.071209  429070 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-639843 addons enable metrics-server
	
	I0127 13:35:30.072569  429070 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 13:35:30.073970  429070 addons.go:514] duration metric: took 2.598533083s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 13:35:30.074007  429070 start.go:246] waiting for cluster config update ...
	I0127 13:35:30.074019  429070 start.go:255] writing updated cluster config ...
	I0127 13:35:30.074258  429070 ssh_runner.go:195] Run: rm -f paused
	I0127 13:35:30.125745  429070 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:35:30.127324  429070 out.go:177] * Done! kubectl is now configured to use "newest-cni-639843" cluster and "default" namespace by default
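With the restart finished and the addons applied, a quick follow-up is to confirm the addon workloads actually came up (assuming the context name matches the profile):

	# Addon toggle state as minikube sees it.
	minikube -p newest-cni-639843 addons list
	# The workloads those addons deploy.
	kubectl --context newest-cni-639843 -n kube-system get deploy metrics-server
	kubectl --context newest-cni-639843 -n kubernetes-dashboard get pods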
	I0127 13:35:41.313958  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:35:41.315406  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:41.315596  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:46.316260  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:46.316520  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:56.316974  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:56.317208  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:16.318338  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:16.318524  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320677  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:56.320945  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320963  427154 kubeadm.go:310] 
	I0127 13:36:56.321020  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:36:56.321085  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:36:56.321099  427154 kubeadm.go:310] 
	I0127 13:36:56.321165  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:36:56.321228  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:36:56.321357  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:36:56.321378  427154 kubeadm.go:310] 
	I0127 13:36:56.321499  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:36:56.321545  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:36:56.321574  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:36:56.321580  427154 kubeadm.go:310] 
	I0127 13:36:56.321720  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:36:56.321827  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:36:56.321840  427154 kubeadm.go:310] 
	I0127 13:36:56.321935  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:36:56.322018  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:36:56.322099  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:36:56.322162  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:36:56.322169  427154 kubeadm.go:310] 
	I0127 13:36:56.323303  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:36:56.323399  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:36:56.323478  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 13:36:56.323617  427154 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 13:36:56.323664  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:36:56.804696  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:36:56.819996  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:36:56.830103  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:36:56.830120  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:36:56.830161  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:36:56.839297  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:36:56.839351  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:36:56.848603  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:36:56.857433  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:36:56.857500  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:36:56.867735  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.876669  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:36:56.876721  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.885857  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:36:56.894734  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:36:56.894788  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:36:56.904112  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:36:56.975515  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:36:56.975724  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:36:57.110596  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:36:57.110748  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:36:57.110890  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:36:57.287182  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:36:57.289124  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:36:57.289247  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:36:57.289310  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:36:57.289405  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:36:57.289504  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:36:57.289595  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:36:57.289665  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:36:57.289780  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:36:57.290345  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:36:57.291337  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:36:57.292274  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:36:57.292554  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:36:57.292622  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:36:57.586245  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:36:57.746278  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:36:57.846816  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:36:57.985775  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:36:58.007369  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:36:58.008417  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:36:58.008485  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:36:58.134182  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:36:58.136066  427154 out.go:235]   - Booting up control plane ...
	I0127 13:36:58.136194  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:36:58.148785  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:36:58.148921  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:36:58.149274  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:36:58.153395  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:37:38.155987  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:37:38.156613  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:38.156831  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:43.157356  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:43.157567  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:53.158341  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:53.158675  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:13.158624  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:13.158876  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157583  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:53.157824  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157839  427154 kubeadm.go:310] 
	I0127 13:38:53.157896  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:38:53.157954  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:38:53.157966  427154 kubeadm.go:310] 
	I0127 13:38:53.158014  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:38:53.158064  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:38:53.158222  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:38:53.158234  427154 kubeadm.go:310] 
	I0127 13:38:53.158404  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:38:53.158453  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:38:53.158483  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:38:53.158491  427154 kubeadm.go:310] 
	I0127 13:38:53.158624  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:38:53.158726  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:38:53.158741  427154 kubeadm.go:310] 
	I0127 13:38:53.158894  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:38:53.159040  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:38:53.159165  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:38:53.159264  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:38:53.159275  427154 kubeadm.go:310] 
	I0127 13:38:53.159902  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:38:53.160042  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:38:53.160128  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:38:53.160213  427154 kubeadm.go:394] duration metric: took 8m2.798471593s to StartCluster
	I0127 13:38:53.160286  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:38:53.160377  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:38:53.205471  427154 cri.go:89] found id: ""
	I0127 13:38:53.205496  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.205504  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:38:53.205510  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:38:53.205577  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:38:53.240500  427154 cri.go:89] found id: ""
	I0127 13:38:53.240532  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.240543  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:38:53.240564  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:38:53.240625  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:38:53.282232  427154 cri.go:89] found id: ""
	I0127 13:38:53.282267  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.282279  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:38:53.282287  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:38:53.282354  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:38:53.315589  427154 cri.go:89] found id: ""
	I0127 13:38:53.315643  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.315659  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:38:53.315666  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:38:53.315735  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:38:53.349806  427154 cri.go:89] found id: ""
	I0127 13:38:53.349836  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.349844  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:38:53.349850  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:38:53.349906  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:38:53.382052  427154 cri.go:89] found id: ""
	I0127 13:38:53.382084  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.382095  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:38:53.382103  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:38:53.382176  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:38:53.416057  427154 cri.go:89] found id: ""
	I0127 13:38:53.416091  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.416103  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:38:53.416120  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:38:53.416185  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:38:53.449983  427154 cri.go:89] found id: ""
	I0127 13:38:53.450017  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.450029  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:38:53.450046  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:38:53.450064  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:38:53.498208  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:38:53.498242  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:38:53.552441  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:38:53.552472  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:38:53.567811  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:38:53.567841  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:38:53.646625  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:38:53.646651  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:38:53.646667  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 13:38:53.748675  427154 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 13:38:53.748747  427154 out.go:270] * 
	W0127 13:38:53.748849  427154 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.748865  427154 out.go:270] * 
	W0127 13:38:53.749670  427154 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:38:53.753264  427154 out.go:201] 
	W0127 13:38:53.754315  427154 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.754372  427154 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 13:38:53.754397  427154 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 13:38:53.755624  427154 out.go:201] 
	
	
	==> CRI-O <==
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.288116631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737985676288093145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acf5fe8c-ec1f-4d56-9256-20b7a7e0daab name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.288628541Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c93f16b0-be6f-443c-8a39-6b497571a43d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.288701732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c93f16b0-be6f-443c-8a39-6b497571a43d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.288743207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c93f16b0-be6f-443c-8a39-6b497571a43d name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.320039886Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe727338-a8ca-4259-a6c1-7810c09894b2 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.320134517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe727338-a8ca-4259-a6c1-7810c09894b2 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.321442939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54ee1cc9-16f7-40dc-abec-40f5e76b2133 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.321879376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737985676321854382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54ee1cc9-16f7-40dc-abec-40f5e76b2133 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.322570535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58f09848-1a5b-46c9-8593-ee066edd2bc4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.322618173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58f09848-1a5b-46c9-8593-ee066edd2bc4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.322651930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=58f09848-1a5b-46c9-8593-ee066edd2bc4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.355248285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3fff99b-887e-457d-baf0-3e0fb0caea61 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.355356692Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3fff99b-887e-457d-baf0-3e0fb0caea61 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.356688918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45bb17ac-70b5-4e49-8471-712e70c75051 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.357170351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737985676357148333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45bb17ac-70b5-4e49-8471-712e70c75051 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.357884320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0be06a2-0efe-4f17-bae9-05dabe07a14a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.357949788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0be06a2-0efe-4f17-bae9-05dabe07a14a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.357982440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c0be06a2-0efe-4f17-bae9-05dabe07a14a name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.390897227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22c67dde-5a2b-4923-bb35-e0680c123962 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.390985153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22c67dde-5a2b-4923-bb35-e0680c123962 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.392048989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff707080-096d-4a36-889b-d3a892631358 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.392444634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737985676392419689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff707080-096d-4a36-889b-d3a892631358 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.393096835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=119ce0b2-1f53-425c-aef4-d689b39728ca name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.393171422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=119ce0b2-1f53-425c-aef4-d689b39728ca name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:47:56 old-k8s-version-838260 crio[636]: time="2025-01-27 13:47:56.393203972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=119ce0b2-1f53-425c-aef4-d689b39728ca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 13:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055193] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042878] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.106143] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.007175] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.630545] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653404] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.059408] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056860] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.174451] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.138641] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.249166] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +7.716430] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.059428] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.921451] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
	[Jan27 13:31] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 13:35] systemd-fstab-generator[5097]: Ignoring "noauto" option for root device
	[Jan27 13:36] systemd-fstab-generator[5380]: Ignoring "noauto" option for root device
	[  +0.055705] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:47:56 up 17 min,  0 users,  load average: 0.00, 0.05, 0.06
	Linux old-k8s-version-838260 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008ce8c0)
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]: goroutine 112 [syscall]:
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]: syscall.Syscall6(0xe8, 0xd, 0xc000ad9b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000ad9b6c, 0x7, 0x7, 0xffffffffffffffff, 0x1, 0xc0005cfab0, 0x0)
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000cd0420, 0x0, 0x0, 0x0)
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000ca09b0)
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jan 27 13:47:54 old-k8s-version-838260 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 13:47:54 old-k8s-version-838260 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 13:47:54 old-k8s-version-838260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jan 27 13:47:54 old-k8s-version-838260 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 13:47:54 old-k8s-version-838260 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6564]: I0127 13:47:54.931534    6564 server.go:416] Version: v1.20.0
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6564]: I0127 13:47:54.931856    6564 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6564]: I0127 13:47:54.933847    6564 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6564]: W0127 13:47:54.934902    6564 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 13:47:54 old-k8s-version-838260 kubelet[6564]: I0127 13:47:54.934954    6564 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
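
The kubeadm `wait-control-plane` failure captured in the dump above points at the kubelet, and the kubeadm output itself names the commands to run on the node. Below is a minimal sketch of that check, run inside the VM (for example via `out/minikube-linux-amd64 ssh -p old-k8s-version-838260`); it only reuses the commands quoted in the log, and `CONTAINERID` is a placeholder to be copied from the `crictl ps` output:

    # Is the kubelet running, and why did it last exit?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100

    # Which Kubernetes containers (if any) did CRI-O start? (pause containers filtered out)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Inspect the logs of a failing container found above (CONTAINERID is a placeholder)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

In this run the kubelet is in a systemd restart loop ("restart counter is at 114") and `crictl ps -a` returns an empty list, which matches the empty "container status" table above.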
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 2 (228.352902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-838260" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.56s)
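
The failure mode is consistent across the retries logged above: the kubelet never becomes healthy, and minikube's own suggestion in the `K8S_KUBELET_NOT_RUNNING` block is to pass `--extra-config=kubelet.cgroup-driver=systemd`, which lines up with the kubelet's "Cannot detect current cgroup on cgroup v2" warning. A hedged sketch of a manual retry along those lines follows; the profile name and Kubernetes version are taken from the logs, while any other start flags (driver, container runtime, memory) are omitted here and would need to match the test's original configuration:

    # Recreate the profile with the kubelet cgroup driver pinned to systemd,
    # as suggested in the minikube output above.
    out/minikube-linux-amd64 delete -p old-k8s-version-838260
    out/minikube-linux-amd64 start -p old-k8s-version-838260 \
        --kubernetes-version=v1.20.0 \
        --extra-config=kubelet.cgroup-driver=systemd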

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (367.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
[identical WARNING repeated 31 more times; duplicate lines omitted]
E0127 13:48:30.266952  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
[identical WARNING repeated 3 more times; duplicate lines omitted]
E0127 13:48:34.952662  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
[identical WARNING repeated 36 more times; duplicate lines omitted]
E0127 13:49:11.102699  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
[identical WARNING repeated 7 more times; duplicate lines omitted]
E0127 13:49:19.966522  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
[identical WARNING repeated 11 more times; duplicate lines omitted]
E0127 13:49:31.287200  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
[identical WARNING repeated 7 more times; duplicate lines omitted]
E0127 13:49:39.548108  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:51:08.031130  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:51:24.791051  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:51:41.051516  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:52:14.403858  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:52:39.248654  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:52:42.619193  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:52:47.854416  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
E0127 13:53:30.267977  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
[previous WARNING line repeated 3 more times]
E0127 13:53:34.952633  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.159:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.159:8443: connect: connection refused
[previous WARNING line repeated 27 more times]
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 2 (233.789364ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-838260" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-838260 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-838260 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.777µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-838260 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
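The repeated WARNING lines and the 9m0s "context deadline exceeded" above are the signature of a label-selector poll against an apiserver that is refusing connections: the helper keeps listing pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace until one is Running or the deadline expires. For reference only, a minimal client-go sketch of such a poll is shown below; it is not the minikube test helper itself, and the kubeconfig argument, the 10-second interval, and the warning text are illustrative assumptions.

// Minimal sketch (assumptions noted above): poll for a Running pod matching
// k8s-app=kubernetes-dashboard, logging a warning and retrying on list errors,
// until a 9m0s deadline expires.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: dashpoll <kubeconfig>")
		os.Exit(2)
	}
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1]) // assumed kubeconfig path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Poll every 10s (assumed interval) for up to 9m, mirroring the timeout seen above.
	err = wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// A refused connection lands here: warn and keep polling.
				fmt.Printf("WARNING: pod list returned: %v\n", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// With the deadline exhausted this is typically "context deadline exceeded".
		fmt.Fprintf(os.Stderr, "dashboard pod did not start: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("dashboard pod is running")
}

With the apiserver down, every List call fails with "connect: connection refused", so the condition never succeeds and the poll ends with context deadline exceeded, which matches the failure reported at start_stop_delete_test.go:285 above.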
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 2 (232.500644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-838260 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-563155                  | no-preload-563155            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC | 27 Jan 25 13:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-563155                                   | no-preload-563155            | jenkins | v1.35.0 | 27 Jan 25 13:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-441438       | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-174381                 | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC | 27 Jan 25 13:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-174381                                  | embed-certs-174381           | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-838260        | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:28 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-838260             | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC | 27 Jan 25 13:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-838260                              | old-k8s-version-838260       | jenkins | v1.35.0 | 27 Jan 25 13:30 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-441438                           | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-441438 | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:33 UTC |
	|         | default-k8s-diff-port-441438                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:33 UTC | 27 Jan 25 13:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-639843             | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-639843                  | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-639843 --memory=2200 --alsologtostderr   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:34 UTC | 27 Jan 25 13:35 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-639843 image list                           | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	| delete  | -p newest-cni-639843                                   | newest-cni-639843            | jenkins | v1.35.0 | 27 Jan 25 13:35 UTC | 27 Jan 25 13:35 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 13:34:50
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 13:34:50.343590  429070 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:34:50.343706  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343717  429070 out.go:358] Setting ErrFile to fd 2...
	I0127 13:34:50.343725  429070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:34:50.343905  429070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:34:50.344540  429070 out.go:352] Setting JSON to false
	I0127 13:34:50.345553  429070 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":22630,"bootTime":1737962260,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:34:50.345705  429070 start.go:139] virtualization: kvm guest
	I0127 13:34:50.348432  429070 out.go:177] * [newest-cni-639843] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:34:50.349607  429070 notify.go:220] Checking for updates...
	I0127 13:34:50.349639  429070 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:34:50.350877  429070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:34:50.352137  429070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:50.353523  429070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:34:50.354936  429070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:34:50.356253  429070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:34:50.358120  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:50.358577  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.358648  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.375344  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
	I0127 13:34:50.375770  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.376385  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.376429  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.376809  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.377061  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.377398  429070 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:34:50.377833  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.377889  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.393490  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0127 13:34:50.393954  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.394574  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.394602  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.394931  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.395175  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.432045  429070 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 13:34:50.433260  429070 start.go:297] selected driver: kvm2
	I0127 13:34:50.433295  429070 start.go:901] validating driver "kvm2" against &{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.433450  429070 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:34:50.434521  429070 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.434662  429070 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 13:34:50.455080  429070 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 13:34:50.455695  429070 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:34:50.455755  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:34:50.455835  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:50.455908  429070 start.go:340] cluster config:
	{Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:34:50.456092  429070 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 13:34:50.457706  429070 out.go:177] * Starting "newest-cni-639843" primary control-plane node in "newest-cni-639843" cluster
	I0127 13:34:50.458857  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:34:50.458907  429070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 13:34:50.458924  429070 cache.go:56] Caching tarball of preloaded images
	I0127 13:34:50.459033  429070 preload.go:172] Found /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0127 13:34:50.459049  429070 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0127 13:34:50.459193  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:34:50.459403  429070 start.go:360] acquireMachinesLock for newest-cni-639843: {Name:mke52695106e01a7135aca0aab1959f5458c23f4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 13:34:50.459457  429070 start.go:364] duration metric: took 33.893µs to acquireMachinesLock for "newest-cni-639843"
	I0127 13:34:50.459478  429070 start.go:96] Skipping create...Using existing machine configuration
	I0127 13:34:50.459488  429070 fix.go:54] fixHost starting: 
	I0127 13:34:50.459761  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:50.459807  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:50.475245  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0127 13:34:50.475743  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:50.476455  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:34:50.476504  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:50.476932  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:50.477227  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:34:50.477420  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:34:50.479725  429070 fix.go:112] recreateIfNeeded on newest-cni-639843: state=Stopped err=<nil>
	I0127 13:34:50.479768  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	W0127 13:34:50.479933  429070 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 13:34:50.481457  429070 out.go:177] * Restarting existing kvm2 VM for "newest-cni-639843" ...
	I0127 13:34:48.302747  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:48.321834  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:48.321899  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:48.370678  427154 cri.go:89] found id: ""
	I0127 13:34:48.370716  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.370732  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:48.370741  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:48.370813  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:48.430514  427154 cri.go:89] found id: ""
	I0127 13:34:48.430655  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.430683  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:48.430702  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:48.430826  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:48.477908  427154 cri.go:89] found id: ""
	I0127 13:34:48.477941  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.477954  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:48.477962  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:48.478036  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:48.532193  427154 cri.go:89] found id: ""
	I0127 13:34:48.532230  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.532242  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:48.532250  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:48.532316  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:48.580627  427154 cri.go:89] found id: ""
	I0127 13:34:48.580658  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.580667  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:48.580673  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:48.580744  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:48.620393  427154 cri.go:89] found id: ""
	I0127 13:34:48.620428  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.620441  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:48.620449  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:48.620518  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:48.662032  427154 cri.go:89] found id: ""
	I0127 13:34:48.662071  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.662079  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:48.662097  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:48.662164  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:48.699662  427154 cri.go:89] found id: ""
	I0127 13:34:48.699697  427154 logs.go:282] 0 containers: []
	W0127 13:34:48.699709  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:48.699723  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:48.699745  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:48.752100  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:48.752134  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:48.768121  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:48.768167  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:48.838690  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:34:48.838718  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:48.838734  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:48.928433  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:48.928471  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:52.576263  426243 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 13:34:52.576356  426243 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:52.576423  426243 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:52.576582  426243 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:52.576704  426243 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 13:34:52.576783  426243 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:34:52.578299  426243 out.go:235]   - Generating certificates and keys ...
	I0127 13:34:52.578380  426243 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:34:52.578439  426243 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:34:52.578509  426243 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:34:52.578594  426243 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:34:52.578701  426243 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:34:52.578757  426243 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:34:52.578818  426243 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:34:52.578870  426243 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:34:52.578962  426243 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:34:52.579063  426243 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:34:52.579111  426243 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:34:52.579164  426243 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:34:52.579227  426243 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:34:52.579282  426243 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 13:34:52.579333  426243 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:34:52.579387  426243 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:34:52.579449  426243 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:34:52.579519  426243 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:34:52.579604  426243 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:34:52.581730  426243 out.go:235]   - Booting up control plane ...
	I0127 13:34:52.581854  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:34:52.581961  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:34:52.582058  426243 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:34:52.582184  426243 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:34:52.582253  426243 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:34:52.582290  426243 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:34:52.582417  426243 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 13:34:52.582554  426243 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 13:34:52.582651  426243 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002999225s
	I0127 13:34:52.582795  426243 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 13:34:52.582903  426243 kubeadm.go:310] [api-check] The API server is healthy after 5.501149453s
	I0127 13:34:52.583076  426243 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 13:34:52.583258  426243 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 13:34:52.583323  426243 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 13:34:52.583591  426243 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-174381 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 13:34:52.583679  426243 kubeadm.go:310] [bootstrap-token] Using token: 5hn0ox.etnk5twofkqgha4f
	I0127 13:34:52.584876  426243 out.go:235]   - Configuring RBAC rules ...
	I0127 13:34:52.585016  426243 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 13:34:52.585138  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 13:34:52.585329  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 13:34:52.585515  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 13:34:52.585645  426243 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 13:34:52.585730  426243 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 13:34:52.585829  426243 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 13:34:52.585867  426243 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 13:34:52.585911  426243 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 13:34:52.585917  426243 kubeadm.go:310] 
	I0127 13:34:52.585967  426243 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 13:34:52.585973  426243 kubeadm.go:310] 
	I0127 13:34:52.586066  426243 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 13:34:52.586082  426243 kubeadm.go:310] 
	I0127 13:34:52.586138  426243 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 13:34:52.586214  426243 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 13:34:52.586295  426243 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 13:34:52.586319  426243 kubeadm.go:310] 
	I0127 13:34:52.586416  426243 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 13:34:52.586463  426243 kubeadm.go:310] 
	I0127 13:34:52.586522  426243 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 13:34:52.586532  426243 kubeadm.go:310] 
	I0127 13:34:52.586628  426243 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 13:34:52.586712  426243 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 13:34:52.586770  426243 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 13:34:52.586777  426243 kubeadm.go:310] 
	I0127 13:34:52.586857  426243 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 13:34:52.586926  426243 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 13:34:52.586932  426243 kubeadm.go:310] 
	I0127 13:34:52.587010  426243 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587095  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 \
	I0127 13:34:52.587119  426243 kubeadm.go:310] 	--control-plane 
	I0127 13:34:52.587125  426243 kubeadm.go:310] 
	I0127 13:34:52.587196  426243 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 13:34:52.587204  426243 kubeadm.go:310] 
	I0127 13:34:52.587272  426243 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5hn0ox.etnk5twofkqgha4f \
	I0127 13:34:52.587400  426243 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:49de44d45c875f5076b8ce3ff1b9fe1cf38389f2853b5266e9f7b1fd38ae1a11 
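The join commands above embed the bootstrap token minted during this run (5hn0ox.etnk5twofkqgha4f). If that token has expired by the time another node is added, a fresh join command can be generated on the control-plane node; a minimal sketch, assuming kubeadm is on the PATH there:

	sudo kubeadm token create --print-join-command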
	I0127 13:34:52.587418  426243 cni.go:84] Creating CNI manager for ""
	I0127 13:34:52.587432  426243 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:34:52.588976  426243 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
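The bridge CNI step writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist inside the guest (the scp appears at 13:34:52.604204 below). Its exact contents are not reproduced in this log, but they can be inspected after the run; a sketch using the profile name from this run:

	minikube -p embed-certs-174381 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"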
	I0127 13:34:50.482735  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Start
	I0127 13:34:50.482923  429070 main.go:141] libmachine: (newest-cni-639843) starting domain...
	I0127 13:34:50.482942  429070 main.go:141] libmachine: (newest-cni-639843) ensuring networks are active...
	I0127 13:34:50.483967  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network default is active
	I0127 13:34:50.484412  429070 main.go:141] libmachine: (newest-cni-639843) Ensuring network mk-newest-cni-639843 is active
	I0127 13:34:50.484881  429070 main.go:141] libmachine: (newest-cni-639843) getting domain XML...
	I0127 13:34:50.485667  429070 main.go:141] libmachine: (newest-cni-639843) creating domain...
	I0127 13:34:51.790885  429070 main.go:141] libmachine: (newest-cni-639843) waiting for IP...
	I0127 13:34:51.792240  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:51.793056  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:51.793082  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:51.792897  429104 retry.go:31] will retry after 310.654811ms: waiting for domain to come up
	I0127 13:34:52.105667  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.106457  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.106639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.106581  429104 retry.go:31] will retry after 280.140783ms: waiting for domain to come up
	I0127 13:34:52.388057  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.388616  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.388639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.388575  429104 retry.go:31] will retry after 317.414736ms: waiting for domain to come up
	I0127 13:34:52.708208  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:52.708845  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:52.708880  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:52.708795  429104 retry.go:31] will retry after 475.980482ms: waiting for domain to come up
	I0127 13:34:53.186613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.187252  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.187320  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.187240  429104 retry.go:31] will retry after 619.306112ms: waiting for domain to come up
	I0127 13:34:53.807794  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:53.808436  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:53.808485  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:53.808365  429104 retry.go:31] will retry after 838.158661ms: waiting for domain to come up
	I0127 13:34:54.647849  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:54.648442  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:54.648465  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:54.648411  429104 retry.go:31] will retry after 739.028542ms: waiting for domain to come up
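While the driver polls for the domain's address above, the same DHCP state can be inspected by hand on the host; a sketch, assuming the libvirt client tools are installed and using the network and domain names from this log:

	virsh net-dhcp-leases mk-newest-cni-639843
	virsh domifaddr newest-cni-639843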
	I0127 13:34:51.475609  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:51.489500  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:34:51.489579  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:34:51.536219  427154 cri.go:89] found id: ""
	I0127 13:34:51.536250  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.536262  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:34:51.536270  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:34:51.536334  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:34:51.577494  427154 cri.go:89] found id: ""
	I0127 13:34:51.577522  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.577536  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:34:51.577543  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:34:51.577606  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:34:51.614430  427154 cri.go:89] found id: ""
	I0127 13:34:51.614463  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.614476  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:34:51.614484  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:34:51.614602  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:34:51.666530  427154 cri.go:89] found id: ""
	I0127 13:34:51.666582  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.666591  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:34:51.666597  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:34:51.666653  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:34:51.705538  427154 cri.go:89] found id: ""
	I0127 13:34:51.705567  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.705579  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:34:51.705587  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:34:51.705645  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:34:51.743604  427154 cri.go:89] found id: ""
	I0127 13:34:51.743638  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.743650  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:34:51.743658  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:34:51.743721  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:34:51.778029  427154 cri.go:89] found id: ""
	I0127 13:34:51.778058  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.778070  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:34:51.778078  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:34:51.778148  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:34:51.819260  427154 cri.go:89] found id: ""
	I0127 13:34:51.819294  427154 logs.go:282] 0 containers: []
	W0127 13:34:51.819307  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:34:51.819321  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:34:51.819338  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:34:51.887511  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:34:51.887552  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:34:51.904227  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:34:51.904261  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:34:51.980655  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
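The "connection refused" on localhost:8443 is consistent with the empty crictl listings above: no kube-apiserver container is running yet. A quick manual check from inside the node would be the following (crictl is clearly present given the listings above; ss is assumed to be available in the guest image):

	sudo ss -tlnp | grep 8443
	sudo crictl ps -a --name kube-apiserver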
	I0127 13:34:51.980684  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:34:51.980699  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0127 13:34:52.085922  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:34:52.085973  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:34:54.642029  427154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:34:54.655922  427154 kubeadm.go:597] duration metric: took 4m4.240008337s to restartPrimaryControlPlane
	W0127 13:34:54.656192  427154 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0127 13:34:54.656244  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:34:52.590276  426243 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:34:52.604204  426243 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:34:52.631515  426243 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:34:52.631609  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:52.631702  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-174381 minikube.k8s.io/updated_at=2025_01_27T13_34_52_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=0d71ce9b1959d04f0d9fa7dbc5639a49619ad89b minikube.k8s.io/name=embed-certs-174381 minikube.k8s.io/primary=true
	I0127 13:34:52.663541  426243 ops.go:34] apiserver oom_adj: -16
	I0127 13:34:52.870691  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.371756  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:53.871386  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.371644  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:54.871179  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.370747  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:55.871458  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.371676  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.870824  426243 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 13:34:56.982232  426243 kubeadm.go:1113] duration metric: took 4.350694221s to wait for elevateKubeSystemPrivileges
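The retried "kubectl get sa default" calls above are simply waiting for the default ServiceAccount to exist, after which the minikube-rbac ClusterRoleBinding created at 13:34:52.631609 takes effect. The same state can be confirmed from the host; a sketch, assuming the embed-certs-174381 context is present in the local kubeconfig:

	kubectl --context embed-certs-174381 -n default get serviceaccount default
	kubectl --context embed-certs-174381 get clusterrolebinding minikube-rbac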
	I0127 13:34:56.982281  426243 kubeadm.go:394] duration metric: took 6m1.699030467s to StartCluster
	I0127 13:34:56.982314  426243 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.982426  426243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:34:56.983746  426243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:34:56.984032  426243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:34:56.984111  426243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:34:56.984230  426243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-174381"
	I0127 13:34:56.984249  426243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-174381"
	W0127 13:34:56.984258  426243 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:34:56.984273  426243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-174381"
	I0127 13:34:56.984292  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984300  426243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-174381"
	I0127 13:34:56.984303  426243 config.go:182] Loaded profile config "embed-certs-174381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:34:56.984359  426243 addons.go:69] Setting dashboard=true in profile "embed-certs-174381"
	I0127 13:34:56.984372  426243 addons.go:238] Setting addon dashboard=true in "embed-certs-174381"
	W0127 13:34:56.984381  426243 addons.go:247] addon dashboard should already be in state true
	I0127 13:34:56.984405  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984450  426243 addons.go:69] Setting metrics-server=true in profile "embed-certs-174381"
	I0127 13:34:56.984484  426243 addons.go:238] Setting addon metrics-server=true in "embed-certs-174381"
	W0127 13:34:56.984494  426243 addons.go:247] addon metrics-server should already be in state true
	I0127 13:34:56.984524  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:56.984760  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984778  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984799  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984801  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984812  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984826  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.984943  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:56.984977  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:56.986354  426243 out.go:177] * Verifying Kubernetes components...
	I0127 13:34:56.988314  426243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:34:57.003008  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33511
	I0127 13:34:57.003716  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.003737  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0127 13:34:57.004011  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38915
	I0127 13:34:57.004163  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004169  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0127 13:34:57.004457  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004482  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.004559  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004638  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.004651  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.004670  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005012  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005085  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005111  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005198  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.005324  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.005340  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.005955  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005969  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.005970  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.006577  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006617  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.006912  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.006964  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.007601  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.007633  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.009217  426243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-174381"
	W0127 13:34:57.009239  426243 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:34:57.009268  426243 host.go:66] Checking if "embed-certs-174381" exists ...
	I0127 13:34:57.009605  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.009648  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.027242  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0127 13:34:57.027495  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
	I0127 13:34:57.027644  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.027844  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.028181  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028198  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028301  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.028318  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.028539  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028633  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.028694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.028808  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.029068  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0127 13:34:57.029543  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.030162  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.030190  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.030581  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.030601  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031166  426243 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:34:57.031207  426243 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:34:57.031430  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.031637  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I0127 13:34:57.031993  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.032625  426243 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:34:57.032750  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.032765  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.033302  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.033477  426243 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:34:57.033498  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.033587  426243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.033607  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:34:57.033627  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.035541  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.035761  426243 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036794  426243 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:34:57.036804  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:34:57.036814  426243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:34:57.036833  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.037349  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.037808  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:34:57.037827  426243 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:34:57.037856  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.038015  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.038042  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.038208  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.038375  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.038561  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.038701  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.041035  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041500  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.041519  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.041915  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042008  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042189  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042254  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.042272  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042413  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.042592  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.042583  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.042727  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.042852  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.055810  426243 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0127 13:34:57.056237  426243 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:34:57.056772  426243 main.go:141] libmachine: Using API Version  1
	I0127 13:34:57.056801  426243 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:34:57.057165  426243 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:34:57.057501  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetState
	I0127 13:34:57.059165  426243 main.go:141] libmachine: (embed-certs-174381) Calling .DriverName
	I0127 13:34:57.059398  426243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.059418  426243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:34:57.059437  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHHostname
	I0127 13:34:57.062703  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063236  426243 main.go:141] libmachine: (embed-certs-174381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:cc:c6", ip: ""} in network mk-embed-certs-174381: {Iface:virbr2 ExpiryTime:2025-01-27 14:28:42 +0000 UTC Type:0 Mac:52:54:00:dd:cc:c6 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:embed-certs-174381 Clientid:01:52:54:00:dd:cc:c6}
	I0127 13:34:57.063266  426243 main.go:141] libmachine: (embed-certs-174381) DBG | domain embed-certs-174381 has defined IP address 192.168.39.7 and MAC address 52:54:00:dd:cc:c6 in network mk-embed-certs-174381
	I0127 13:34:57.063369  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHPort
	I0127 13:34:57.063544  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHKeyPath
	I0127 13:34:57.063694  426243 main.go:141] libmachine: (embed-certs-174381) Calling .GetSSHUsername
	I0127 13:34:57.063831  426243 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/embed-certs-174381/id_rsa Username:docker}
	I0127 13:34:57.242347  426243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:34:57.326178  426243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352801  426243 node_ready.go:49] node "embed-certs-174381" has status "Ready":"True"
	I0127 13:34:57.352828  426243 node_ready.go:38] duration metric: took 26.613856ms for node "embed-certs-174381" to be "Ready" ...
	I0127 13:34:57.352841  426243 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 13:34:57.368293  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:57.372941  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:34:57.372962  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:34:57.391676  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:34:57.418587  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:34:57.418616  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:34:57.446588  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:34:57.460844  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:34:57.460869  426243 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:34:57.507947  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:34:57.507976  426243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:34:57.542669  426243 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.542701  426243 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:34:57.630641  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:34:57.630672  426243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:34:57.639506  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:34:57.693463  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:34:57.693498  426243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:34:57.806045  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:34:57.806082  426243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:34:57.930058  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:34:57.930101  426243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:34:58.055263  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:34:58.055295  426243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:34:58.110576  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:34:58.110609  426243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:34:58.202270  426243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.202305  426243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:34:58.293311  426243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:34:58.514356  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.067720868s)
	I0127 13:34:58.514435  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514450  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.514846  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.514876  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.514894  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.514909  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.514920  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.515161  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.515197  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.515860  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516243  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124532885s)
	I0127 13:34:58.516270  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516281  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.516739  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:58.516757  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.516768  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.516776  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.516787  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.517207  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.517230  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549206  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:58.549228  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:58.549614  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:58.549638  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:58.549648  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260116  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.620545789s)
	I0127 13:34:59.260244  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260271  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.260620  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:34:59.260713  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.260730  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.260746  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:34:59.260761  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:34:59.261011  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:34:59.261041  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:34:59.261061  426243 addons.go:479] Verifying addon metrics-server=true in "embed-certs-174381"
	I0127 13:34:59.395546  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.395572  426243 pod_ready.go:82] duration metric: took 2.027244475s for pod "coredns-668d6bf9bc-9ncnm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.395586  426243 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407673  426243 pod_ready.go:93] pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.407695  426243 pod_ready.go:82] duration metric: took 12.102291ms for pod "coredns-668d6bf9bc-hjncm" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.407705  426243 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417168  426243 pod_ready.go:93] pod "etcd-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:34:59.417190  426243 pod_ready.go:82] duration metric: took 9.47928ms for pod "etcd-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:34:59.417199  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:00.168433  426243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.875044372s)
	I0127 13:35:00.168496  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.168520  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.168866  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.170590  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.170645  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.170666  426243 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:00.170673  426243 main.go:141] libmachine: (embed-certs-174381) Calling .Close
	I0127 13:35:00.171042  426243 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:00.171132  426243 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:00.171105  426243 main.go:141] libmachine: (embed-certs-174381) DBG | Closing plugin on server side
	I0127 13:35:00.172686  426243 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-174381 addons enable metrics-server
	
	I0127 13:35:00.174376  426243 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
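From the host, the enabled addons and the freshly applied dashboard objects can be double-checked; a sketch using the profile name from this run:

	minikube -p embed-certs-174381 addons list
	kubectl --context embed-certs-174381 -n kubernetes-dashboard get pods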
	I0127 13:34:59.517968  427154 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.861694115s)
	I0127 13:34:59.518062  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:34:59.536180  427154 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:34:59.547986  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:34:59.561566  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:34:59.561591  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:34:59.561645  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:34:59.574802  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:34:59.574872  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:34:59.588185  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:34:59.598292  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:34:59.598356  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:34:59.608921  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.621764  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:34:59.621825  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:34:59.635526  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:34:59.646582  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:34:59.646644  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:34:59.657975  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:34:59.745239  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:34:59.745337  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:34:59.946676  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:34:59.946890  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:34:59.947050  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:35:00.183580  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
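The preflight output above notes that the required images can be pulled ahead of time. On this guest that would look roughly like the following, using the pinned binary path and Kubernetes version from the log:

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images pull --kubernetes-version v1.20.0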
	I0127 13:34:55.388471  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:55.388933  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:55.388964  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:55.388914  429104 retry.go:31] will retry after 1.346738272s: waiting for domain to come up
	I0127 13:34:56.737433  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:56.738024  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:56.738081  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:56.738007  429104 retry.go:31] will retry after 1.120347472s: waiting for domain to come up
	I0127 13:34:57.860265  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:34:57.860912  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:34:57.860943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:34:57.860882  429104 retry.go:31] will retry after 2.152534572s: waiting for domain to come up
	I0127 13:35:00.015953  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:00.016579  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:00.016613  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:00.016544  429104 retry.go:31] will retry after 2.588698804s: waiting for domain to come up
	I0127 13:35:00.184950  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:35:00.185049  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:35:00.185140  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:35:00.185334  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:35:00.185435  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:35:00.186094  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:35:00.186301  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:35:00.187022  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:35:00.187455  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:35:00.187928  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:35:00.188334  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:35:00.188531  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:35:00.188608  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:35:00.344156  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:35:00.836083  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:35:00.964664  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:35:01.072929  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:35:01.092946  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:35:01.097538  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:35:01.097961  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:35:01.292953  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
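From here kubeadm hands off to the kubelet, which watches /etc/kubernetes/manifests and starts the static control-plane pods. If this step stalls, the usual places to look inside the guest are (a sketch):

	ls /etc/kubernetes/manifests
	sudo journalctl -u kubelet -f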
	I0127 13:35:00.175566  426243 addons.go:514] duration metric: took 3.191465201s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0127 13:35:01.424773  426243 pod_ready.go:103] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"False"
	I0127 13:35:01.924012  426243 pod_ready.go:93] pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:01.924044  426243 pod_ready.go:82] duration metric: took 2.506836977s for pod "kube-apiserver-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:01.924057  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.607848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:02.608639  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:02.608669  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:02.608620  429104 retry.go:31] will retry after 2.763044938s: waiting for domain to come up
	I0127 13:35:01.294375  427154 out.go:235]   - Booting up control plane ...
	I0127 13:35:01.294569  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:35:01.306014  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:35:01.309847  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:35:01.310062  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:35:01.312436  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:35:02.931062  426243 pod_ready.go:93] pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.931095  426243 pod_ready.go:82] duration metric: took 1.007026875s for pod "kube-controller-manager-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.931108  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936917  426243 pod_ready.go:93] pod "kube-proxy-cjsf9" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:02.936945  426243 pod_ready.go:82] duration metric: took 5.828276ms for pod "kube-proxy-cjsf9" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:02.936957  426243 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444155  426243 pod_ready.go:93] pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace has status "Ready":"True"
	I0127 13:35:03.444192  426243 pod_ready.go:82] duration metric: took 507.225554ms for pod "kube-scheduler-embed-certs-174381" in "kube-system" namespace to be "Ready" ...
	I0127 13:35:03.444203  426243 pod_ready.go:39] duration metric: took 6.091349359s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
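The pod_ready waits above poll each system-critical pod until its Ready condition reports True. As a rough illustration only (not minikube's own helper), the same check can be written against client-go; the kubeconfig path and pod name below are placeholders borrowed loosely from this run:

	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the named pod has condition Ready=True.
	func podReady(clientset kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		// Placeholder kubeconfig path; the profile under test keeps its own.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		fmt.Println(podReady(clientset, "kube-system", "kube-scheduler-embed-certs-174381"))
	}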
	I0127 13:35:03.444226  426243 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:03.444294  426243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:03.488162  426243 api_server.go:72] duration metric: took 6.504085901s to wait for apiserver process to appear ...
	I0127 13:35:03.488197  426243 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:03.488224  426243 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0127 13:35:03.493586  426243 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0127 13:35:03.494867  426243 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:03.494894  426243 api_server.go:131] duration metric: took 6.689991ms to wait for apiserver health ...
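The healthz probe above simply requests https://192.168.39.7:8443/healthz and treats a 200 response with body "ok" as healthy. A minimal stand-alone sketch of that probe (TLS verification is skipped purely for brevity; a real check would trust the cluster CA):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: skip certificate verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.7:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
	}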
	I0127 13:35:03.494903  426243 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:03.575835  426243 system_pods.go:59] 9 kube-system pods found
	I0127 13:35:03.575871  426243 system_pods.go:61] "coredns-668d6bf9bc-9ncnm" [8ac9ae9c-0e9f-4a4e-ab93-beaf92234ad7] Running
	I0127 13:35:03.575877  426243 system_pods.go:61] "coredns-668d6bf9bc-hjncm" [68641e50-9f99-4811-9752-c7dc0db47502] Running
	I0127 13:35:03.575881  426243 system_pods.go:61] "etcd-embed-certs-174381" [fc5cb0ba-724d-4b3d-a6d0-65644ed57d99] Running
	I0127 13:35:03.575886  426243 system_pods.go:61] "kube-apiserver-embed-certs-174381" [7afdc2d3-86bd-480d-a081-e1475ff21346] Running
	I0127 13:35:03.575890  426243 system_pods.go:61] "kube-controller-manager-embed-certs-174381" [fa410171-2b30-4c79-97d4-87c1549fd75c] Running
	I0127 13:35:03.575894  426243 system_pods.go:61] "kube-proxy-cjsf9" [c395a351-dcc3-4e8c-b6eb-bc3d4386ebf6] Running
	I0127 13:35:03.575901  426243 system_pods.go:61] "kube-scheduler-embed-certs-174381" [ab92b381-fb78-4aa1-bc55-4e47a58f2c32] Running
	I0127 13:35:03.575908  426243 system_pods.go:61] "metrics-server-f79f97bbb-hxlwf" [cb779c78-85f9-48e7-88c3-f087f57547e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:03.575913  426243 system_pods.go:61] "storage-provisioner" [3be7cb61-a4f1-4347-8aa7-8a6e6bbbe6c1] Running
	I0127 13:35:03.575922  426243 system_pods.go:74] duration metric: took 81.012821ms to wait for pod list to return data ...
	I0127 13:35:03.575931  426243 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:03.772597  426243 default_sa.go:45] found service account: "default"
	I0127 13:35:03.772641  426243 default_sa.go:55] duration metric: took 196.700969ms for default service account to be created ...
	I0127 13:35:03.772655  426243 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 13:35:03.976966  426243 system_pods.go:87] 9 kube-system pods found
	I0127 13:35:05.375624  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:05.376167  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:05.376199  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:05.376124  429104 retry.go:31] will retry after 2.824398155s: waiting for domain to come up
	I0127 13:35:08.203385  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:08.203848  429070 main.go:141] libmachine: (newest-cni-639843) DBG | unable to find current IP address of domain newest-cni-639843 in network mk-newest-cni-639843
	I0127 13:35:08.203881  429070 main.go:141] libmachine: (newest-cni-639843) DBG | I0127 13:35:08.203823  429104 retry.go:31] will retry after 4.529537578s: waiting for domain to come up
	I0127 13:35:12.735786  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736343  429070 main.go:141] libmachine: (newest-cni-639843) found domain IP: 192.168.50.22
	I0127 13:35:12.736364  429070 main.go:141] libmachine: (newest-cni-639843) reserving static IP address...
	I0127 13:35:12.736378  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has current primary IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.736707  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.736748  429070 main.go:141] libmachine: (newest-cni-639843) reserved static IP address 192.168.50.22 for domain newest-cni-639843
	I0127 13:35:12.736770  429070 main.go:141] libmachine: (newest-cni-639843) DBG | skip adding static IP to network mk-newest-cni-639843 - found existing host DHCP lease matching {name: "newest-cni-639843", mac: "52:54:00:cd:d6:b3", ip: "192.168.50.22"}
	I0127 13:35:12.736785  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Getting to WaitForSSH function...
	I0127 13:35:12.736810  429070 main.go:141] libmachine: (newest-cni-639843) waiting for SSH...
	I0127 13:35:12.739230  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739563  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.739592  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.739721  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH client type: external
	I0127 13:35:12.739746  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Using SSH private key: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa (-rw-------)
	I0127 13:35:12.739781  429070 main.go:141] libmachine: (newest-cni-639843) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0127 13:35:12.739791  429070 main.go:141] libmachine: (newest-cni-639843) DBG | About to run SSH command:
	I0127 13:35:12.739800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | exit 0
	I0127 13:35:12.866664  429070 main.go:141] libmachine: (newest-cni-639843) DBG | SSH cmd err, output: <nil>: 
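The "external" SSH client above shells out to /usr/bin/ssh with the options shown and runs `exit 0` until the VM accepts the connection. A small reproduction with os/exec, using a subset of those options and the key path and address from this log:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa",
			"-p", "22",
			"docker@192.168.50.22",
			"exit 0", // the same no-op command the log uses to probe SSH readiness
		}
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
			fmt.Println("ssh not ready yet:", err)
			return
		}
		fmt.Println("ssh is up")
	}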
	I0127 13:35:12.867059  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetConfigRaw
	I0127 13:35:12.867776  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:12.870461  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.870943  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.870979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.871221  429070 profile.go:143] Saving config to /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/config.json ...
	I0127 13:35:12.871401  429070 machine.go:93] provisionDockerMachine start ...
	I0127 13:35:12.871421  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:12.871618  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.873979  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874373  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.874411  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.874581  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.874746  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.874903  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.875063  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.875221  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.875426  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.875440  429070 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 13:35:12.979102  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 13:35:12.979140  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979406  429070 buildroot.go:166] provisioning hostname "newest-cni-639843"
	I0127 13:35:12.979435  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:12.979647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:12.982631  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983000  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:12.983025  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:12.983170  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:12.983324  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983447  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:12.983605  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:12.983809  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:12.984033  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:12.984051  429070 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-639843 && echo "newest-cni-639843" | sudo tee /etc/hostname
	I0127 13:35:13.107964  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-639843
	
	I0127 13:35:13.108004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.111168  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111589  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.111617  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.111790  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.111995  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112158  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.112289  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.112481  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.112709  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.112733  429070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-639843' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-639843/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-639843' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 13:35:13.226643  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
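The shell snippet above keeps the hostname entry idempotent: it only touches /etc/hosts when no line already ends in the new hostname, either rewriting an existing 127.0.1.1 line or appending one. The same logic expressed in Go (pointed at a scratch copy of the file for illustration):

	package main
	
	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)
	
	// ensureHostname mirrors the shell logic: add or rewrite a 127.0.1.1 entry for name.
	func ensureHostname(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
			return nil // a line for this hostname already exists
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if loopback.Match(data) {
			out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+name)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + name + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}
	
	func main() {
		fmt.Println(ensureHostname("/tmp/hosts-copy", "newest-cni-639843"))
	}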
	I0127 13:35:13.226683  429070 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20317-361578/.minikube CaCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20317-361578/.minikube}
	I0127 13:35:13.226734  429070 buildroot.go:174] setting up certificates
	I0127 13:35:13.226749  429070 provision.go:84] configureAuth start
	I0127 13:35:13.226767  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetMachineName
	I0127 13:35:13.227060  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:13.230284  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230719  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.230752  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.230938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.233444  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233798  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.233832  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.233972  429070 provision.go:143] copyHostCerts
	I0127 13:35:13.234039  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem, removing ...
	I0127 13:35:13.234053  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem
	I0127 13:35:13.234146  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/cert.pem (1123 bytes)
	I0127 13:35:13.234301  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem, removing ...
	I0127 13:35:13.234313  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem
	I0127 13:35:13.234354  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/key.pem (1675 bytes)
	I0127 13:35:13.234450  429070 exec_runner.go:144] found /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem, removing ...
	I0127 13:35:13.234462  429070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem
	I0127 13:35:13.234497  429070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20317-361578/.minikube/ca.pem (1078 bytes)
	I0127 13:35:13.234598  429070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem org=jenkins.newest-cni-639843 san=[127.0.0.1 192.168.50.22 localhost minikube newest-cni-639843]
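The server certificate above is issued with a SAN list covering the loopback address, the VM IP, and the machine names. A compact crypto/x509 sketch of producing such a certificate (self-signed here for brevity; the real flow signs it with ca.pem/ca-key.pem):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-639843"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.22")},
			DNSNames:    []string{"localhost", "minikube", "newest-cni-639843"},
		}
		// Self-signed for the sketch; pass the CA cert and key as parent/signer in the real case.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		out, err := os.Create("server.pem")
		if err != nil {
			panic(err)
		}
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}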
	I0127 13:35:13.505038  429070 provision.go:177] copyRemoteCerts
	I0127 13:35:13.505119  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 13:35:13.505154  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.508162  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508530  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.508555  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.508759  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.508944  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.509117  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.509267  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:13.595888  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 13:35:13.621151  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0127 13:35:13.647473  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 13:35:13.673605  429070 provision.go:87] duration metric: took 446.83901ms to configureAuth
	I0127 13:35:13.673655  429070 buildroot.go:189] setting minikube options for container-runtime
	I0127 13:35:13.673889  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:13.674004  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.676982  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677392  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.677421  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.677573  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.677762  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.677972  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.678123  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.678273  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:13.678496  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:13.678527  429070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0127 13:35:13.921465  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0127 13:35:13.921494  429070 machine.go:96] duration metric: took 1.050079095s to provisionDockerMachine
	I0127 13:35:13.921510  429070 start.go:293] postStartSetup for "newest-cni-639843" (driver="kvm2")
	I0127 13:35:13.921522  429070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 13:35:13.921543  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:13.921954  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 13:35:13.922025  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:13.925574  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.925941  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:13.926012  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:13.926266  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:13.926493  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:13.926675  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:13.926888  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.014753  429070 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 13:35:14.019344  429070 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 13:35:14.019374  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/addons for local assets ...
	I0127 13:35:14.019439  429070 filesync.go:126] Scanning /home/jenkins/minikube-integration/20317-361578/.minikube/files for local assets ...
	I0127 13:35:14.019540  429070 filesync.go:149] local asset: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem -> 3689462.pem in /etc/ssl/certs
	I0127 13:35:14.019659  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 13:35:14.031277  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:14.060121  429070 start.go:296] duration metric: took 138.59357ms for postStartSetup
	I0127 13:35:14.060165  429070 fix.go:56] duration metric: took 23.600678344s for fixHost
	I0127 13:35:14.060188  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.063145  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063514  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.063542  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.063761  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.063980  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064176  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.064340  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.064541  429070 main.go:141] libmachine: Using SSH client type: native
	I0127 13:35:14.064724  429070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.22 22 <nil> <nil>}
	I0127 13:35:14.064738  429070 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 13:35:14.172785  429070 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737984914.150810987
	
	I0127 13:35:14.172823  429070 fix.go:216] guest clock: 1737984914.150810987
	I0127 13:35:14.172832  429070 fix.go:229] Guest: 2025-01-27 13:35:14.150810987 +0000 UTC Remote: 2025-01-27 13:35:14.060169498 +0000 UTC m=+23.763612053 (delta=90.641489ms)
	I0127 13:35:14.172889  429070 fix.go:200] guest clock delta is within tolerance: 90.641489ms
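The fix-up above compares the guest's `date +%s.%N` output against the host clock and accepts the ~90ms delta as within tolerance. A small sketch of that comparison, reusing the two timestamps printed in the log:

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	func main() {
		// Output of `date +%s.%N` on the guest, copied from the log above.
		guest := "1737984914.150810987"
		parts := strings.SplitN(guest, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guestTime := time.Unix(sec, nsec)
	
		// Host-side reference time ("Remote" in the log), in time.Time's default string format.
		host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2025-01-27 13:35:14.060169498 +0000 UTC")
		if err != nil {
			panic(err)
		}
	
		delta := guestTime.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		// Prints roughly the 90.641489ms delta reported above; the 2s tolerance is illustrative, not minikube's actual threshold.
		fmt.Printf("delta=%s within-tolerance=%v\n", delta, delta < 2*time.Second)
	}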
	I0127 13:35:14.172905  429070 start.go:83] releasing machines lock for "newest-cni-639843", held for 23.713435883s
	I0127 13:35:14.172938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.173202  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:14.176163  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176559  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.176600  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.176708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177182  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177351  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:14.177450  429070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 13:35:14.177498  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.177596  429070 ssh_runner.go:195] Run: cat /version.json
	I0127 13:35:14.177625  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:14.180456  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180561  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180800  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180838  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.180910  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.180914  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:14.180944  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:14.181150  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:14.181189  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181344  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:14.181357  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181546  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:14.181536  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.181739  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:14.283980  429070 ssh_runner.go:195] Run: systemctl --version
	I0127 13:35:14.290329  429070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0127 13:35:14.450608  429070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 13:35:14.461512  429070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 13:35:14.461597  429070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 13:35:14.482924  429070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 13:35:14.482951  429070 start.go:495] detecting cgroup driver to use...
	I0127 13:35:14.483022  429070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 13:35:14.503452  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 13:35:14.517592  429070 docker.go:217] disabling cri-docker service (if available) ...
	I0127 13:35:14.517659  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0127 13:35:14.532792  429070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0127 13:35:14.547306  429070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0127 13:35:14.671116  429070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0127 13:35:14.818034  429070 docker.go:233] disabling docker service ...
	I0127 13:35:14.818133  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0127 13:35:14.832550  429070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0127 13:35:14.845137  429070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0127 13:35:14.986833  429070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0127 13:35:15.122943  429070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0127 13:35:15.137706  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 13:35:15.157591  429070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0127 13:35:15.157669  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.168185  429070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0127 13:35:15.168268  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.178876  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.188792  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.198951  429070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 13:35:15.209169  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.219549  429070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.238633  429070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0127 13:35:15.249729  429070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 13:35:15.259178  429070 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 13:35:15.259244  429070 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 13:35:15.272097  429070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 13:35:15.281611  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:15.403472  429070 ssh_runner.go:195] Run: sudo systemctl restart crio
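The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon cgroup, default sysctls), loads br_netfilter, enables IP forwarding, and restarts crio. One of those edits expressed in Go instead of sed, as a sketch to run against a scratch copy of the file:

	package main
	
	import (
		"fmt"
		"os"
		"regexp"
	)
	
	// setTOMLKey replaces an existing `key = ...` line, mirroring the
	// `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'` call in the log.
	func setTOMLKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		if !re.Match(data) {
			return fmt.Errorf("%s not present in %s", key, path)
		}
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0644)
	}
	
	func main() {
		// Scratch copy; the real run edits /etc/crio/crio.conf.d/02-crio.conf and then restarts crio.
		fmt.Println(setTOMLKey("/tmp/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10"))
	}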
	I0127 13:35:15.498842  429070 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0127 13:35:15.498928  429070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0127 13:35:15.505405  429070 start.go:563] Will wait 60s for crictl version
	I0127 13:35:15.505478  429070 ssh_runner.go:195] Run: which crictl
	I0127 13:35:15.509869  429070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 13:35:15.580026  429070 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0127 13:35:15.580122  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.609376  429070 ssh_runner.go:195] Run: crio --version
	I0127 13:35:15.643173  429070 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0127 13:35:15.644483  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetIP
	I0127 13:35:15.647483  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.647905  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:15.647930  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:15.648148  429070 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0127 13:35:15.652911  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:15.668696  429070 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 13:35:15.670127  429070 kubeadm.go:883] updating cluster {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 13:35:15.670264  429070 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 13:35:15.670328  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:15.716362  429070 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0127 13:35:15.716455  429070 ssh_runner.go:195] Run: which lz4
	I0127 13:35:15.721254  429070 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 13:35:15.727443  429070 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 13:35:15.727478  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0127 13:35:17.208454  429070 crio.go:462] duration metric: took 1.487249966s to copy over tarball
	I0127 13:35:17.208542  429070 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 13:35:19.421239  429070 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.212662568s)
	I0127 13:35:19.421271  429070 crio.go:469] duration metric: took 2.21278342s to extract the tarball
	I0127 13:35:19.421281  429070 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 13:35:19.461756  429070 ssh_runner.go:195] Run: sudo crictl images --output json
	I0127 13:35:19.504974  429070 crio.go:514] all images are preloaded for cri-o runtime.
	I0127 13:35:19.505005  429070 cache_images.go:84] Images are preloaded, skipping loading
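The preload check above runs `sudo crictl images --output json` and looks for the expected tags in the decoded list. A hedged sketch of that decode step; the struct below models only the fields the check needs, with field names assumed from crictl's JSON output shape rather than copied from minikube:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// imageList covers just enough of `crictl images --output json` for this check.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	
	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}
	
	func main() {
		fmt.Println(hasImage("registry.k8s.io/kube-apiserver:v1.32.1"))
	}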
	I0127 13:35:19.505015  429070 kubeadm.go:934] updating node { 192.168.50.22 8443 v1.32.1 crio true true} ...
	I0127 13:35:19.505173  429070 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-639843 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
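The kubelet drop-in above is rendered from the node's settings (binaries path, hostname override, node IP) and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A text/template sketch that reproduces the same ExecStart block; the template text is paraphrased from the log output, not minikube's actual template:

	package main
	
	import (
		"os"
		"text/template"
	)
	
	const unit = `[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	
	[Install]
	`
	
	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values taken from the log above.
		if err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.32.1",
			"NodeName":          "newest-cni-639843",
			"NodeIP":            "192.168.50.22",
		}); err != nil {
			panic(err)
		}
	}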
	I0127 13:35:19.505269  429070 ssh_runner.go:195] Run: crio config
	I0127 13:35:19.556732  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:19.556754  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:19.556766  429070 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0127 13:35:19.556791  429070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.22 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-639843 NodeName:newest-cni-639843 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 13:35:19.556951  429070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-639843"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
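The generated config above is a multi-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm, then KubeletConfiguration, then KubeProxyConfiguration. As a quick sanity check of the kubelet document, a sketch that unmarshals a few of its fields with gopkg.in/yaml.v3 (the library choice and the tiny struct are illustrative assumptions, not kubelet API types):

	package main
	
	import (
		"fmt"
	
		"gopkg.in/yaml.v3"
	)
	
	// kubeletDoc models only the fields this sketch inspects.
	type kubeletDoc struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		FailSwapOn               bool   `yaml:"failSwapOn"`
	}
	
	const doc = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failSwapOn: false
	`
	
	func main() {
		var cfg kubeletDoc
		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", cfg)
	}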
	
	I0127 13:35:19.557032  429070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 13:35:19.567405  429070 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 13:35:19.567483  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 13:35:19.577572  429070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 13:35:19.595555  429070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 13:35:19.612336  429070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0127 13:35:19.630199  429070 ssh_runner.go:195] Run: grep 192.168.50.22	control-plane.minikube.internal$ /etc/hosts
	I0127 13:35:19.634268  429070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 13:35:19.646912  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:19.764087  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:19.783083  429070 certs.go:68] Setting up /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843 for IP: 192.168.50.22
	I0127 13:35:19.783115  429070 certs.go:194] generating shared ca certs ...
	I0127 13:35:19.783139  429070 certs.go:226] acquiring lock for ca certs: {Name:mk2a8ecd4a7a58a165d570ba2e64a4e6bda7627c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:19.783330  429070 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key
	I0127 13:35:19.783386  429070 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key
	I0127 13:35:19.783400  429070 certs.go:256] generating profile certs ...
	I0127 13:35:19.783534  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/client.key
	I0127 13:35:19.783619  429070 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key.505bfb94
	I0127 13:35:19.783671  429070 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key
	I0127 13:35:19.783826  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem (1338 bytes)
	W0127 13:35:19.783866  429070 certs.go:480] ignoring /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946_empty.pem, impossibly tiny 0 bytes
	I0127 13:35:19.783880  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 13:35:19.783913  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/ca.pem (1078 bytes)
	I0127 13:35:19.783939  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/cert.pem (1123 bytes)
	I0127 13:35:19.783961  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/certs/key.pem (1675 bytes)
	I0127 13:35:19.784010  429070 certs.go:484] found cert: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem (1708 bytes)
	I0127 13:35:19.784667  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 13:35:19.821550  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 13:35:19.860184  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 13:35:19.893311  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 13:35:19.926181  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 13:35:19.954565  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 13:35:19.997938  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 13:35:20.022058  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/newest-cni-639843/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 13:35:20.045748  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/ssl/certs/3689462.pem --> /usr/share/ca-certificates/3689462.pem (1708 bytes)
	I0127 13:35:20.069279  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 13:35:20.092959  429070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20317-361578/.minikube/certs/368946.pem --> /usr/share/ca-certificates/368946.pem (1338 bytes)
	I0127 13:35:20.117180  429070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 13:35:20.135202  429070 ssh_runner.go:195] Run: openssl version
	I0127 13:35:20.141197  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3689462.pem && ln -fs /usr/share/ca-certificates/3689462.pem /etc/ssl/certs/3689462.pem"
	I0127 13:35:20.152160  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156810  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 12:22 /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.156871  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3689462.pem
	I0127 13:35:20.162645  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3689462.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 13:35:20.174920  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 13:35:20.187426  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192129  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 12:13 /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.192174  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 13:35:20.198019  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 13:35:20.210195  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368946.pem && ln -fs /usr/share/ca-certificates/368946.pem /etc/ssl/certs/368946.pem"
	I0127 13:35:20.220934  429070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225588  429070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 12:22 /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.225622  429070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368946.pem
	I0127 13:35:20.231516  429070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368946.pem /etc/ssl/certs/51391683.0"
	I0127 13:35:20.243779  429070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 13:35:20.248511  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 13:35:20.254523  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 13:35:20.260441  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 13:35:20.266429  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 13:35:20.272290  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 13:35:20.278051  429070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 13:35:20.284024  429070 kubeadm.go:392] StartCluster: {Name:newest-cni-639843 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:newest-cni-639843 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:35:20.284105  429070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0127 13:35:20.284164  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.332523  429070 cri.go:89] found id: ""
	I0127 13:35:20.332587  429070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 13:35:20.344932  429070 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 13:35:20.344959  429070 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 13:35:20.345011  429070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 13:35:20.355729  429070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 13:35:20.356795  429070 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-639843" does not appear in /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:20.357505  429070 kubeconfig.go:62] /home/jenkins/minikube-integration/20317-361578/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-639843" cluster setting kubeconfig missing "newest-cni-639843" context setting]
	I0127 13:35:20.358374  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:20.360037  429070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 13:35:20.371572  429070 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.22
	I0127 13:35:20.371606  429070 kubeadm.go:1160] stopping kube-system containers ...
	I0127 13:35:20.371622  429070 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0127 13:35:20.371679  429070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0127 13:35:20.418797  429070 cri.go:89] found id: ""
	I0127 13:35:20.418873  429070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 13:35:20.437304  429070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:35:20.447636  429070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:35:20.447660  429070 kubeadm.go:157] found existing configuration files:
	
	I0127 13:35:20.447704  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:35:20.458280  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:35:20.458335  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:35:20.469304  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:35:20.478639  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:35:20.478689  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:35:20.488624  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.497867  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:35:20.497908  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:35:20.507379  429070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:35:20.516362  429070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:35:20.516416  429070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:35:20.525787  429070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 13:35:20.542646  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:20.671597  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.498726  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.899789  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:21.965210  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:22.062165  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:22.062252  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:22.563318  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.063066  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:23.082649  429070 api_server.go:72] duration metric: took 1.020482627s to wait for apiserver process to appear ...
	I0127 13:35:23.082686  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:23.082711  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:23.083244  429070 api_server.go:269] stopped: https://192.168.50.22:8443/healthz: Get "https://192.168.50.22:8443/healthz": dial tcp 192.168.50.22:8443: connect: connection refused
	I0127 13:35:23.583699  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.503776  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 13:35:25.503807  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 13:35:25.503825  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.547403  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.547434  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:25.583659  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:25.589328  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:25.589357  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.082833  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.087881  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.087908  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:26.583159  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:26.592115  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 13:35:26.592148  429070 api_server.go:103] status: https://192.168.50.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 13:35:27.083703  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.090407  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.098905  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.098928  429070 api_server.go:131] duration metric: took 4.01623437s to wait for apiserver health ...
	I0127 13:35:27.098938  429070 cni.go:84] Creating CNI manager for ""
	I0127 13:35:27.098944  429070 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 13:35:27.100651  429070 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 13:35:27.101855  429070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 13:35:27.116286  429070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0127 13:35:27.139348  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.158680  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.158717  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.158730  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.158741  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.158748  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.158757  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 13:35:27.158766  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.158776  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.158785  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 13:35:27.158819  429070 system_pods.go:74] duration metric: took 19.446392ms to wait for pod list to return data ...
	I0127 13:35:27.158832  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.168338  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.168376  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.168392  429070 node_conditions.go:105] duration metric: took 9.550643ms to run NodePressure ...
	I0127 13:35:27.168416  429070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 13:35:27.459759  429070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 13:35:27.473184  429070 ops.go:34] apiserver oom_adj: -16
	I0127 13:35:27.473212  429070 kubeadm.go:597] duration metric: took 7.128244476s to restartPrimaryControlPlane
	I0127 13:35:27.473226  429070 kubeadm.go:394] duration metric: took 7.18920723s to StartCluster
	I0127 13:35:27.473251  429070 settings.go:142] acquiring lock: {Name:mkda0261525b914a5c9ff035bef267a7ec4017dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.473341  429070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:35:27.475111  429070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20317-361578/kubeconfig: {Name:mk5022a08b57363442d401324a9652eca48de97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 13:35:27.475373  429070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.22 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0127 13:35:27.475451  429070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 13:35:27.475562  429070 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-639843"
	I0127 13:35:27.475584  429070 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-639843"
	W0127 13:35:27.475598  429070 addons.go:247] addon storage-provisioner should already be in state true
	I0127 13:35:27.475598  429070 addons.go:69] Setting dashboard=true in profile "newest-cni-639843"
	I0127 13:35:27.475600  429070 addons.go:69] Setting metrics-server=true in profile "newest-cni-639843"
	I0127 13:35:27.475621  429070 addons.go:238] Setting addon dashboard=true in "newest-cni-639843"
	I0127 13:35:27.475629  429070 addons.go:238] Setting addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:27.475639  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	W0127 13:35:27.475643  429070 addons.go:247] addon metrics-server should already be in state true
	I0127 13:35:27.475676  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.475582  429070 addons.go:69] Setting default-storageclass=true in profile "newest-cni-639843"
	I0127 13:35:27.475611  429070 config.go:182] Loaded profile config "newest-cni-639843": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:35:27.475708  429070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-639843"
	W0127 13:35:27.475630  429070 addons.go:247] addon dashboard should already be in state true
	I0127 13:35:27.475812  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.476070  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476077  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476115  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476134  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476159  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.476168  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476195  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.476204  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.477011  429070 out.go:177] * Verifying Kubernetes components...
	I0127 13:35:27.478509  429070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 13:35:27.493703  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0127 13:35:27.493801  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I0127 13:35:27.493955  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0127 13:35:27.494221  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494259  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494795  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494819  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.494840  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.494932  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.494956  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495188  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495296  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.495464  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.495481  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.495764  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495798  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.495812  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495819  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.495871  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.496119  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0127 13:35:27.496433  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.496529  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.496572  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.496893  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.496916  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.497264  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.497502  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.502029  429070 addons.go:238] Setting addon default-storageclass=true in "newest-cni-639843"
	W0127 13:35:27.502051  429070 addons.go:247] addon default-storageclass should already be in state true
	I0127 13:35:27.502080  429070 host.go:66] Checking if "newest-cni-639843" exists ...
	I0127 13:35:27.502830  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.502873  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.512816  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I0127 13:35:27.513096  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0127 13:35:27.513275  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I0127 13:35:27.535151  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535226  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535266  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.535748  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535766  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.535769  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.535791  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.536087  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536347  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.536392  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.536559  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.537321  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.537343  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.537676  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.537946  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.538406  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539127  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.539700  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.540468  429070 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 13:35:27.540479  429070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 13:35:27.541259  429070 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0127 13:35:27.542133  429070 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.542154  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 13:35:27.542174  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.542782  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 13:35:27.542801  429070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 13:35:27.542820  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.543610  429070 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0127 13:35:27.544743  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 13:35:27.544762  429070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 13:35:27.544780  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.545935  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546330  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.546364  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.546495  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.546708  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.546872  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.547017  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.547822  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548084  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548244  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548291  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548448  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548585  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.548619  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.548647  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548786  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.548800  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.548938  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.548980  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.549036  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.549180  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.554799  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0127 13:35:27.555253  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.555780  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.555800  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.556187  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.556616  429070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:35:27.556646  429070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:35:27.574277  429070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40511
	I0127 13:35:27.574815  429070 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:35:27.575396  429070 main.go:141] libmachine: Using API Version  1
	I0127 13:35:27.575420  429070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:35:27.575741  429070 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:35:27.575966  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetState
	I0127 13:35:27.577346  429070 main.go:141] libmachine: (newest-cni-639843) Calling .DriverName
	I0127 13:35:27.577556  429070 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.577574  429070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 13:35:27.577594  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHHostname
	I0127 13:35:27.580061  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580408  429070 main.go:141] libmachine: (newest-cni-639843) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:d6:b3", ip: ""} in network mk-newest-cni-639843: {Iface:virbr3 ExpiryTime:2025-01-27 14:34:08 +0000 UTC Type:0 Mac:52:54:00:cd:d6:b3 Iaid: IPaddr:192.168.50.22 Prefix:24 Hostname:newest-cni-639843 Clientid:01:52:54:00:cd:d6:b3}
	I0127 13:35:27.580432  429070 main.go:141] libmachine: (newest-cni-639843) DBG | domain newest-cni-639843 has defined IP address 192.168.50.22 and MAC address 52:54:00:cd:d6:b3 in network mk-newest-cni-639843
	I0127 13:35:27.580659  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHPort
	I0127 13:35:27.580836  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHKeyPath
	I0127 13:35:27.580987  429070 main.go:141] libmachine: (newest-cni-639843) Calling .GetSSHUsername
	I0127 13:35:27.581148  429070 sshutil.go:53] new ssh client: &{IP:192.168.50.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/newest-cni-639843/id_rsa Username:docker}
	I0127 13:35:27.713210  429070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 13:35:27.737971  429070 api_server.go:52] waiting for apiserver process to appear ...
	I0127 13:35:27.738049  429070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 13:35:27.755609  429070 api_server.go:72] duration metric: took 280.198045ms to wait for apiserver process to appear ...
	I0127 13:35:27.755639  429070 api_server.go:88] waiting for apiserver healthz status ...
	I0127 13:35:27.755660  429070 api_server.go:253] Checking apiserver healthz at https://192.168.50.22:8443/healthz ...
	I0127 13:35:27.765216  429070 api_server.go:279] https://192.168.50.22:8443/healthz returned 200:
	ok
	I0127 13:35:27.767614  429070 api_server.go:141] control plane version: v1.32.1
	I0127 13:35:27.767639  429070 api_server.go:131] duration metric: took 11.991322ms to wait for apiserver health ...
	I0127 13:35:27.767650  429070 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 13:35:27.781696  429070 system_pods.go:59] 8 kube-system pods found
	I0127 13:35:27.781778  429070 system_pods.go:61] "coredns-668d6bf9bc-lvnnf" [90a484c9-993e-4330-b0ee-5ee2db376d30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 13:35:27.781799  429070 system_pods.go:61] "etcd-newest-cni-639843" [3aed6af4-5010-4768-99c1-756dad69bd8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 13:35:27.781815  429070 system_pods.go:61] "kube-apiserver-newest-cni-639843" [3a136fd0-e2d4-4b39-89fa-c962f54cc0be] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 13:35:27.781827  429070 system_pods.go:61] "kube-controller-manager-newest-cni-639843" [69909786-3c53-45f1-8bc2-495b84c4566e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 13:35:27.781836  429070 system_pods.go:61] "kube-proxy-t858p" [4538a8a6-4147-4809-b241-e70931e3faaa] Running
	I0127 13:35:27.781862  429070 system_pods.go:61] "kube-scheduler-newest-cni-639843" [c070e009-b7c2-4ff5-9f5a-8240f48e2908] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 13:35:27.781874  429070 system_pods.go:61] "metrics-server-f79f97bbb-r2mgv" [26acbe63-87ce-4b17-afcc-7403176ea056] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 13:35:27.781884  429070 system_pods.go:61] "storage-provisioner" [f3767b22-55b2-4cb8-80ca-bd40493ca276] Running
	I0127 13:35:27.781895  429070 system_pods.go:74] duration metric: took 14.236485ms to wait for pod list to return data ...
	I0127 13:35:27.781908  429070 default_sa.go:34] waiting for default service account to be created ...
	I0127 13:35:27.787854  429070 default_sa.go:45] found service account: "default"
	I0127 13:35:27.787884  429070 default_sa.go:55] duration metric: took 5.965578ms for default service account to be created ...
	I0127 13:35:27.787899  429070 kubeadm.go:582] duration metric: took 312.493014ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 13:35:27.787924  429070 node_conditions.go:102] verifying NodePressure condition ...
	I0127 13:35:27.793927  429070 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 13:35:27.793949  429070 node_conditions.go:123] node cpu capacity is 2
	I0127 13:35:27.793961  429070 node_conditions.go:105] duration metric: took 6.028431ms to run NodePressure ...
	I0127 13:35:27.793975  429070 start.go:241] waiting for startup goroutines ...
	I0127 13:35:27.806081  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 13:35:27.851437  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 13:35:27.912936  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 13:35:27.912967  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0127 13:35:27.941546  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 13:35:27.941579  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 13:35:28.017628  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 13:35:28.017663  429070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 13:35:28.027973  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 13:35:28.028016  429070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 13:35:28.097111  429070 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.097146  429070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 13:35:28.148404  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 13:35:28.148439  429070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 13:35:28.272234  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 13:35:28.273446  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 13:35:28.273473  429070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0127 13:35:28.324863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 13:35:28.324897  429070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 13:35:28.400474  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 13:35:28.400504  429070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 13:35:28.460550  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 13:35:28.460583  429070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 13:35:28.508999  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 13:35:28.509031  429070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 13:35:28.555538  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555570  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.555889  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.555906  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.555915  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.555923  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.556151  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.556180  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.556196  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.564252  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:28.564277  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:28.564553  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:28.564574  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:28.564893  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:28.605863  429070 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:28.605896  429070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 13:35:28.650259  429070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 13:35:29.517093  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.66560932s)
	I0127 13:35:29.517160  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517173  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517607  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:29.517645  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517655  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.517664  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.517672  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.517974  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.517996  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741184  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.46890411s)
	I0127 13:35:29.741241  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741252  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.741558  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.741576  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.741586  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:29.741609  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:29.742656  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:29.742680  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:29.742692  429070 addons.go:479] Verifying addon metrics-server=true in "newest-cni-639843"
	I0127 13:35:29.742659  429070 main.go:141] libmachine: (newest-cni-639843) DBG | Closing plugin on server side
	I0127 13:35:30.069134  429070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.418812542s)
	I0127 13:35:30.069214  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069233  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069539  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069559  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.069568  429070 main.go:141] libmachine: Making call to close driver server
	I0127 13:35:30.069575  429070 main.go:141] libmachine: (newest-cni-639843) Calling .Close
	I0127 13:35:30.069840  429070 main.go:141] libmachine: Successfully made call to close driver server
	I0127 13:35:30.069856  429070 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 13:35:30.071209  429070 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-639843 addons enable metrics-server
	
	I0127 13:35:30.072569  429070 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0127 13:35:30.073970  429070 addons.go:514] duration metric: took 2.598533083s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0127 13:35:30.074007  429070 start.go:246] waiting for cluster config update ...
	I0127 13:35:30.074019  429070 start.go:255] writing updated cluster config ...
	I0127 13:35:30.074258  429070 ssh_runner.go:195] Run: rm -f paused
	I0127 13:35:30.125745  429070 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 13:35:30.127324  429070 out.go:177] * Done! kubectl is now configured to use "newest-cni-639843" cluster and "default" namespace by default
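At this point the newest-cni-639843 profile has come up cleanly with the default-storageclass, storage-provisioner, metrics-server and dashboard addons enabled. A quick verification sketch, assuming the stock namespaces and labels the minikube addons normally use (kube-system with label k8s-app=metrics-server for metrics-server, the kubernetes-dashboard namespace for the dashboard); adjust to your local context if these differ:

    # Confirm the enabled addons actually came up in the new cluster.
    kubectl --context newest-cni-639843 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context newest-cni-639843 -n kubernetes-dashboard get pods
    # metrics-server needs a short while before node metrics are served.
    kubectl --context newest-cni-639843 top nodes
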
	I0127 13:35:41.313958  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:35:41.315406  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:41.315596  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:46.316260  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:46.316520  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:35:56.316974  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:35:56.317208  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:16.318338  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:16.318524  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320677  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:36:56.320945  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:36:56.320963  427154 kubeadm.go:310] 
	I0127 13:36:56.321020  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:36:56.321085  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:36:56.321099  427154 kubeadm.go:310] 
	I0127 13:36:56.321165  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:36:56.321228  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:36:56.321357  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:36:56.321378  427154 kubeadm.go:310] 
	I0127 13:36:56.321499  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:36:56.321545  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:36:56.321574  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:36:56.321580  427154 kubeadm.go:310] 
	I0127 13:36:56.321720  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:36:56.321827  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:36:56.321840  427154 kubeadm.go:310] 
	I0127 13:36:56.321935  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:36:56.322018  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:36:56.322099  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:36:56.322162  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:36:56.322169  427154 kubeadm.go:310] 
	I0127 13:36:56.323303  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:36:56.323399  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:36:56.323478  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0127 13:36:56.323617  427154 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
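The kubeadm output above keeps failing the same probe: the kubelet health endpoint on 127.0.0.1:10248 refuses connections, so no control-plane static pods ever start. A minimal troubleshooting sketch, using only the commands the log itself recommends and assuming shell access to the node (for example via "minikube ssh -p old-k8s-version-838260"; the profile name is taken from the CRI-O section further down):

    # Is the kubelet unit running at all? (the warning above notes it is not even enabled)
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    # List any control-plane containers CRI-O managed to start, then inspect one:
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
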
	
	I0127 13:36:56.323664  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0127 13:36:56.804696  427154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 13:36:56.819996  427154 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 13:36:56.830103  427154 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 13:36:56.830120  427154 kubeadm.go:157] found existing configuration files:
	
	I0127 13:36:56.830161  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 13:36:56.839297  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 13:36:56.839351  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 13:36:56.848603  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 13:36:56.857433  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 13:36:56.857500  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 13:36:56.867735  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.876669  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 13:36:56.876721  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 13:36:56.885857  427154 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 13:36:56.894734  427154 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 13:36:56.894788  427154 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 13:36:56.904112  427154 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 13:36:56.975515  427154 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0127 13:36:56.975724  427154 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 13:36:57.110596  427154 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 13:36:57.110748  427154 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 13:36:57.110890  427154 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 13:36:57.287182  427154 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 13:36:57.289124  427154 out.go:235]   - Generating certificates and keys ...
	I0127 13:36:57.289247  427154 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 13:36:57.289310  427154 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 13:36:57.289405  427154 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 13:36:57.289504  427154 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0127 13:36:57.289595  427154 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 13:36:57.289665  427154 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0127 13:36:57.289780  427154 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0127 13:36:57.290345  427154 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0127 13:36:57.291337  427154 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 13:36:57.292274  427154 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 13:36:57.292554  427154 kubeadm.go:310] [certs] Using the existing "sa" key
	I0127 13:36:57.292622  427154 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 13:36:57.586245  427154 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 13:36:57.746278  427154 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 13:36:57.846816  427154 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 13:36:57.985775  427154 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 13:36:58.007369  427154 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 13:36:58.008417  427154 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 13:36:58.008485  427154 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 13:36:58.134182  427154 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 13:36:58.136066  427154 out.go:235]   - Booting up control plane ...
	I0127 13:36:58.136194  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 13:36:58.148785  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 13:36:58.148921  427154 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 13:36:58.149274  427154 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 13:36:58.153395  427154 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 13:37:38.155987  427154 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0127 13:37:38.156613  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:38.156831  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:43.157356  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:43.157567  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:37:53.158341  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:37:53.158675  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:13.158624  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:13.158876  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157583  427154 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 13:38:53.157824  427154 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 13:38:53.157839  427154 kubeadm.go:310] 
	I0127 13:38:53.157896  427154 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0127 13:38:53.157954  427154 kubeadm.go:310] 		timed out waiting for the condition
	I0127 13:38:53.157966  427154 kubeadm.go:310] 
	I0127 13:38:53.158014  427154 kubeadm.go:310] 	This error is likely caused by:
	I0127 13:38:53.158064  427154 kubeadm.go:310] 		- The kubelet is not running
	I0127 13:38:53.158222  427154 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 13:38:53.158234  427154 kubeadm.go:310] 
	I0127 13:38:53.158404  427154 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 13:38:53.158453  427154 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0127 13:38:53.158483  427154 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0127 13:38:53.158491  427154 kubeadm.go:310] 
	I0127 13:38:53.158624  427154 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 13:38:53.158726  427154 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 13:38:53.158741  427154 kubeadm.go:310] 
	I0127 13:38:53.158894  427154 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0127 13:38:53.159040  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0127 13:38:53.159165  427154 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0127 13:38:53.159264  427154 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0127 13:38:53.159275  427154 kubeadm.go:310] 
	I0127 13:38:53.159902  427154 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 13:38:53.160042  427154 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 13:38:53.160128  427154 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0127 13:38:53.160213  427154 kubeadm.go:394] duration metric: took 8m2.798471593s to StartCluster
	I0127 13:38:53.160286  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0127 13:38:53.160377  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0127 13:38:53.205471  427154 cri.go:89] found id: ""
	I0127 13:38:53.205496  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.205504  427154 logs.go:284] No container was found matching "kube-apiserver"
	I0127 13:38:53.205510  427154 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0127 13:38:53.205577  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0127 13:38:53.240500  427154 cri.go:89] found id: ""
	I0127 13:38:53.240532  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.240543  427154 logs.go:284] No container was found matching "etcd"
	I0127 13:38:53.240564  427154 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0127 13:38:53.240625  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0127 13:38:53.282232  427154 cri.go:89] found id: ""
	I0127 13:38:53.282267  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.282279  427154 logs.go:284] No container was found matching "coredns"
	I0127 13:38:53.282287  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0127 13:38:53.282354  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0127 13:38:53.315589  427154 cri.go:89] found id: ""
	I0127 13:38:53.315643  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.315659  427154 logs.go:284] No container was found matching "kube-scheduler"
	I0127 13:38:53.315666  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0127 13:38:53.315735  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0127 13:38:53.349806  427154 cri.go:89] found id: ""
	I0127 13:38:53.349836  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.349844  427154 logs.go:284] No container was found matching "kube-proxy"
	I0127 13:38:53.349850  427154 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0127 13:38:53.349906  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0127 13:38:53.382052  427154 cri.go:89] found id: ""
	I0127 13:38:53.382084  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.382095  427154 logs.go:284] No container was found matching "kube-controller-manager"
	I0127 13:38:53.382103  427154 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0127 13:38:53.382176  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0127 13:38:53.416057  427154 cri.go:89] found id: ""
	I0127 13:38:53.416091  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.416103  427154 logs.go:284] No container was found matching "kindnet"
	I0127 13:38:53.416120  427154 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0127 13:38:53.416185  427154 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0127 13:38:53.449983  427154 cri.go:89] found id: ""
	I0127 13:38:53.450017  427154 logs.go:282] 0 containers: []
	W0127 13:38:53.450029  427154 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0127 13:38:53.450046  427154 logs.go:123] Gathering logs for container status ...
	I0127 13:38:53.450064  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 13:38:53.498208  427154 logs.go:123] Gathering logs for kubelet ...
	I0127 13:38:53.498242  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 13:38:53.552441  427154 logs.go:123] Gathering logs for dmesg ...
	I0127 13:38:53.552472  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 13:38:53.567811  427154 logs.go:123] Gathering logs for describe nodes ...
	I0127 13:38:53.567841  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 13:38:53.646625  427154 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 13:38:53.646651  427154 logs.go:123] Gathering logs for CRI-O ...
	I0127 13:38:53.646667  427154 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0127 13:38:53.748675  427154 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 13:38:53.748747  427154 out.go:270] * 
	W0127 13:38:53.748849  427154 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.748865  427154 out.go:270] * 
	W0127 13:38:53.749670  427154 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 13:38:53.753264  427154 out.go:201] 
	W0127 13:38:53.754315  427154 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 13:38:53.754372  427154 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 13:38:53.754397  427154 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 13:38:53.755624  427154 out.go:201] 
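The closing suggestion in the log points at a kubelet cgroup-driver mismatch as the likely cause. A hedged sketch of a retry that follows that suggestion; only the --extra-config flag is quoted from the log, while the driver, runtime and version flags are assumptions based on this KVM/cri-o job and the v1.20.0 kubeadm init shown above:

    # Retry the old-k8s-version profile with the kubelet cgroup driver pinned to systemd,
    # as suggested by the K8S_KUBELET_NOT_RUNNING advice in the log.
    minikube start -p old-k8s-version-838260 \
      --driver=kvm2 \
      --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd
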
	
	
	==> CRI-O <==
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.404114920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986043404078024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c3c3bcc-4773-4247-b809-68828d88c2c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.404806725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac8465a2-88f3-4c5b-8ce2-e1a7dbbab5e3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.404858196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac8465a2-88f3-4c5b-8ce2-e1a7dbbab5e3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.404889427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ac8465a2-88f3-4c5b-8ce2-e1a7dbbab5e3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.436595524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98155773-e540-4058-8b13-a212cf579b23 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.436726289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98155773-e540-4058-8b13-a212cf579b23 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.438289215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e596e73-2a1c-47f6-85aa-2185a6f8b64d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.438654380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986043438636614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e596e73-2a1c-47f6-85aa-2185a6f8b64d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.439178121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a0e82a2-494b-40cd-a1fa-7c32577d156f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.439249065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a0e82a2-494b-40cd-a1fa-7c32577d156f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.439286549Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8a0e82a2-494b-40cd-a1fa-7c32577d156f name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.469647100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef12162d-e0ae-4df4-8b4e-91f709b372d0 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.469704955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef12162d-e0ae-4df4-8b4e-91f709b372d0 name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.470907932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da887f50-72cb-4a4a-b827-696bb61e6cb2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.471288914Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986043471266197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da887f50-72cb-4a4a-b827-696bb61e6cb2 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.471826844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=342367e1-6569-4880-be6f-d99980c671a6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.471876515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=342367e1-6569-4880-be6f-d99980c671a6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.471907075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=342367e1-6569-4880-be6f-d99980c671a6 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.506349996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7028aa06-b48e-4db7-be0e-8b8fae78911c name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.506433716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7028aa06-b48e-4db7-be0e-8b8fae78911c name=/runtime.v1.RuntimeService/Version
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.507840386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=784f90a9-a130-4c8f-af5d-a17d6e1a6b4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.508267664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737986043508233152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=784f90a9-a130-4c8f-af5d-a17d6e1a6b4d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.509018682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05486818-c744-4c96-8c84-b910d99ccb62 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.509072683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05486818-c744-4c96-8c84-b910d99ccb62 name=/runtime.v1.RuntimeService/ListContainers
	Jan 27 13:54:03 old-k8s-version-838260 crio[636]: time="2025-01-27 13:54:03.509105526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=05486818-c744-4c96-8c84-b910d99ccb62 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan27 13:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055193] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042878] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.106143] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.007175] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.630545] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653404] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.059408] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056860] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.174451] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.138641] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.249166] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +7.716430] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.059428] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.921451] systemd-fstab-generator[1016]: Ignoring "noauto" option for root device
	[Jan27 13:31] kauditd_printk_skb: 46 callbacks suppressed
	[Jan27 13:35] systemd-fstab-generator[5097]: Ignoring "noauto" option for root device
	[Jan27 13:36] systemd-fstab-generator[5380]: Ignoring "noauto" option for root device
	[  +0.055705] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:54:03 up 23 min,  0 users,  load average: 0.03, 0.03, 0.04
	Linux old-k8s-version-838260 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00061a6f0)
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d2def0, 0x4f0ac20, 0xc000950d70, 0x1, 0xc0001000c0)
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00073ca80, 0xc0001000c0)
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00087ff50, 0xc0009667e0)
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7235]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 27 13:54:02 old-k8s-version-838260 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 27 13:54:02 old-k8s-version-838260 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 27 13:54:02 old-k8s-version-838260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 178.
	Jan 27 13:54:02 old-k8s-version-838260 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 27 13:54:02 old-k8s-version-838260 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7253]: I0127 13:54:02.933165    7253 server.go:416] Version: v1.20.0
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7253]: I0127 13:54:02.933690    7253 server.go:837] Client rotation is on, will bootstrap in background
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7253]: I0127 13:54:02.935606    7253 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7253]: W0127 13:54:02.936715    7253 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 27 13:54:02 old-k8s-version-838260 kubelet[7253]: I0127 13:54:02.936729    7253 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 2 (227.214569ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-838260" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (367.10s)
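Note: the post-mortem above shows the apiserver stopped, cri-o returning an empty container list, and the kubelet crash-looping (systemd restart counter at 178). As one possible triage sketch only (profile name taken from the logs above; assumes the VM from this run were still reachable and that journalctl is available in the guest, which the systemd/kubelet log lines suggest), the failure could be inspected with commands of this shape:

	# Check overall profile state (host, kubelet, apiserver)
	out/minikube-linux-amd64 status -p old-k8s-version-838260
	# Collect recent minikube logs, as the test harness does in the post-mortem
	out/minikube-linux-amd64 -p old-k8s-version-838260 logs -n 25
	# Look at the kubelet unit inside the VM to see why it keeps restarting
	out/minikube-linux-amd64 -p old-k8s-version-838260 ssh "sudo journalctl -u kubelet --no-pager -n 100"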

                                                
                                    

Test pass (261/312)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 33.63
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 17.44
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.14
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.64
22 TestOffline 83.29
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 209.83
31 TestAddons/serial/GCPAuth/Namespaces 1.69
32 TestAddons/serial/GCPAuth/FakeCredentials 13.5
35 TestAddons/parallel/Registry 28.39
37 TestAddons/parallel/InspektorGadget 12.09
38 TestAddons/parallel/MetricsServer 5.74
40 TestAddons/parallel/CSI 53.53
41 TestAddons/parallel/Headlamp 26.08
42 TestAddons/parallel/CloudSpanner 5.58
43 TestAddons/parallel/LocalPath 69.34
44 TestAddons/parallel/NvidiaDevicePlugin 6.8
45 TestAddons/parallel/Yakd 11.92
47 TestAddons/StoppedEnableDisable 91.26
48 TestCertOptions 61.9
49 TestCertExpiration 311.32
51 TestForceSystemdFlag 64.42
52 TestForceSystemdEnv 71.14
54 TestKVMDriverInstallOrUpdate 8.1
58 TestErrorSpam/setup 43.61
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.57
62 TestErrorSpam/unpause 1.72
63 TestErrorSpam/stop 5.7
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 54.86
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 36.8
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
75 TestFunctional/serial/CacheCmd/cache/add_local 2.78
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 36.41
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.53
86 TestFunctional/serial/LogsFileCmd 1.42
87 TestFunctional/serial/InvalidService 5.4
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 29.02
91 TestFunctional/parallel/DryRun 0.3
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.85
97 TestFunctional/parallel/ServiceCmdConnect 11.46
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 50.86
101 TestFunctional/parallel/SSHCmd 0.47
102 TestFunctional/parallel/CpCmd 1.34
103 TestFunctional/parallel/MySQL 27.5
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.24
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
113 TestFunctional/parallel/License 0.81
123 TestFunctional/parallel/MountCmd/any-port 13.75
124 TestFunctional/parallel/ServiceCmd/DeployApp 8.52
125 TestFunctional/parallel/MountCmd/specific-port 1.61
126 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
127 TestFunctional/parallel/Version/short 0.05
128 TestFunctional/parallel/Version/components 0.53
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
133 TestFunctional/parallel/ImageCommands/ImageBuild 7.15
134 TestFunctional/parallel/ImageCommands/Setup 2.44
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.29
136 TestFunctional/parallel/ServiceCmd/List 0.93
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.85
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
140 TestFunctional/parallel/ServiceCmd/Format 0.3
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.46
142 TestFunctional/parallel/ServiceCmd/URL 0.27
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.12
147 TestFunctional/parallel/ImageCommands/ImageRemove 1.52
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
150 TestFunctional/parallel/ProfileCmd/profile_list 0.38
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 204.94
160 TestMultiControlPlane/serial/DeployApp 10.82
161 TestMultiControlPlane/serial/PingHostFromPods 1.25
162 TestMultiControlPlane/serial/AddWorkerNode 57.86
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
165 TestMultiControlPlane/serial/CopyFile 13.13
166 TestMultiControlPlane/serial/StopSecondaryNode 91.64
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
168 TestMultiControlPlane/serial/RestartSecondaryNode 48.51
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 496.14
171 TestMultiControlPlane/serial/DeleteSecondaryNode 18.25
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 272.69
174 TestMultiControlPlane/serial/RestartCluster 116.49
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
176 TestMultiControlPlane/serial/AddSecondaryNode 87.52
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
181 TestJSONOutput/start/Command 82.24
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.69
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.63
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.38
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 90.82
213 TestMountStart/serial/StartWithMountFirst 27.16
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 30.59
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.68
218 TestMountStart/serial/VerifyMountPostDelete 0.37
219 TestMountStart/serial/Stop 1.63
220 TestMountStart/serial/RestartStopped 24.73
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 112.67
225 TestMultiNode/serial/DeployApp2Nodes 8.72
226 TestMultiNode/serial/PingHostFrom2Pods 0.84
227 TestMultiNode/serial/AddNode 53.25
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.33
231 TestMultiNode/serial/StopNode 2.4
232 TestMultiNode/serial/StartAfterStop 41.14
233 TestMultiNode/serial/RestartKeepsNodes 327.89
234 TestMultiNode/serial/DeleteNode 2.76
235 TestMultiNode/serial/StopMultiNode 181.85
236 TestMultiNode/serial/RestartMultiNode 105.42
237 TestMultiNode/serial/ValidateNameConflict 43.64
244 TestScheduledStopUnix 115.51
248 TestRunningBinaryUpgrade 199.2
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
254 TestNoKubernetes/serial/StartWithK8s 98.21
255 TestNoKubernetes/serial/StartWithStopK8s 43.65
256 TestNoKubernetes/serial/Start 29.29
264 TestNetworkPlugins/group/false 3.11
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
269 TestNoKubernetes/serial/ProfileList 16.05
278 TestPause/serial/Start 53.02
279 TestNoKubernetes/serial/Stop 1.28
280 TestNoKubernetes/serial/StartNoArgs 30.78
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
283 TestStoppedBinaryUpgrade/Setup 4.2
284 TestStoppedBinaryUpgrade/Upgrade 132.74
285 TestNetworkPlugins/group/auto/Start 90.72
286 TestNetworkPlugins/group/kindnet/Start 93.64
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
288 TestNetworkPlugins/group/calico/Start 103.35
289 TestNetworkPlugins/group/auto/KubeletFlags 0.26
290 TestNetworkPlugins/group/auto/NetCatPod 11.93
291 TestNetworkPlugins/group/auto/DNS 0.16
292 TestNetworkPlugins/group/auto/Localhost 0.14
293 TestNetworkPlugins/group/auto/HairPin 0.14
294 TestNetworkPlugins/group/custom-flannel/Start 81.57
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
297 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
298 TestNetworkPlugins/group/enable-default-cni/Start 71.24
299 TestNetworkPlugins/group/kindnet/DNS 0.13
300 TestNetworkPlugins/group/kindnet/Localhost 0.13
301 TestNetworkPlugins/group/kindnet/HairPin 0.12
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/calico/KubeletFlags 0.24
304 TestNetworkPlugins/group/calico/NetCatPod 12.26
305 TestNetworkPlugins/group/flannel/Start 89.19
306 TestNetworkPlugins/group/calico/DNS 0.15
307 TestNetworkPlugins/group/calico/Localhost 0.13
308 TestNetworkPlugins/group/calico/HairPin 0.12
309 TestNetworkPlugins/group/bridge/Start 73.05
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.26
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.45
314 TestNetworkPlugins/group/custom-flannel/DNS 0.15
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
323 TestStartStop/group/no-preload/serial/FirstStart 95.99
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
326 TestNetworkPlugins/group/flannel/NetCatPod 12.22
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
328 TestNetworkPlugins/group/bridge/NetCatPod 15.71
329 TestNetworkPlugins/group/flannel/DNS 0.18
330 TestNetworkPlugins/group/flannel/Localhost 0.15
331 TestNetworkPlugins/group/flannel/HairPin 0.14
332 TestNetworkPlugins/group/bridge/DNS 0.17
333 TestNetworkPlugins/group/bridge/Localhost 0.12
334 TestNetworkPlugins/group/bridge/HairPin 0.12
336 TestStartStop/group/embed-certs/serial/FirstStart 96.9
338 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.24
339 TestStartStop/group/no-preload/serial/DeployApp 13.31
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
341 TestStartStop/group/no-preload/serial/Stop 91.08
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.27
343 TestStartStop/group/embed-certs/serial/DeployApp 13.26
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.19
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
347 TestStartStop/group/embed-certs/serial/Stop 91.05
348 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
351 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 326.9
352 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
356 TestStartStop/group/old-k8s-version/serial/Stop 5.29
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.66
364 TestStartStop/group/newest-cni/serial/FirstStart 49.09
365 TestStartStop/group/newest-cni/serial/DeployApp 0
366 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
367 TestStartStop/group/newest-cni/serial/Stop 7.35
368 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
369 TestStartStop/group/newest-cni/serial/SecondStart 40.11
370 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
371 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
373 TestStartStop/group/newest-cni/serial/Pause 2.31
TestDownloadOnly/v1.20.0/json-events (33.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-484253 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-484253 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (33.624761989s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (33.63s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 12:12:16.968654  368946 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0127 12:12:16.968812  368946 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-484253
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-484253: exit status 85 (63.507963ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-484253 | jenkins | v1.35.0 | 27 Jan 25 12:11 UTC |          |
	|         | -p download-only-484253        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:11:43
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:11:43.388604  368957 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:11:43.388733  368957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:11:43.388743  368957 out.go:358] Setting ErrFile to fd 2...
	I0127 12:11:43.388747  368957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:11:43.388963  368957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	W0127 12:11:43.389156  368957 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20317-361578/.minikube/config/config.json: open /home/jenkins/minikube-integration/20317-361578/.minikube/config/config.json: no such file or directory
	I0127 12:11:43.389808  368957 out.go:352] Setting JSON to true
	I0127 12:11:43.390756  368957 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":17643,"bootTime":1737962260,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:11:43.390875  368957 start.go:139] virtualization: kvm guest
	I0127 12:11:43.393271  368957 out.go:97] [download-only-484253] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:11:43.393437  368957 notify.go:220] Checking for updates...
	W0127 12:11:43.393453  368957 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 12:11:43.394921  368957 out.go:169] MINIKUBE_LOCATION=20317
	I0127 12:11:43.396160  368957 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:11:43.397319  368957 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 12:11:43.398502  368957 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 12:11:43.399836  368957 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 12:11:43.401860  368957 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 12:11:43.402128  368957 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:11:43.436188  368957 out.go:97] Using the kvm2 driver based on user configuration
	I0127 12:11:43.436222  368957 start.go:297] selected driver: kvm2
	I0127 12:11:43.436239  368957 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:11:43.436600  368957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:11:43.436682  368957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:11:43.451596  368957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:11:43.451646  368957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:11:43.452342  368957 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 12:11:43.452513  368957 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:11:43.452564  368957 cni.go:84] Creating CNI manager for ""
	I0127 12:11:43.452645  368957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:11:43.452656  368957 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:11:43.452736  368957 start.go:340] cluster config:
	{Name:download-only-484253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-484253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:11:43.452923  368957 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:11:43.454652  368957 out.go:97] Downloading VM boot image ...
	I0127 12:11:43.454704  368957 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 12:11:57.591348  368957 out.go:97] Starting "download-only-484253" primary control-plane node in "download-only-484253" cluster
	I0127 12:11:57.591380  368957 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 12:11:57.750947  368957 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0127 12:11:57.750987  368957 cache.go:56] Caching tarball of preloaded images
	I0127 12:11:57.751165  368957 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0127 12:11:57.753049  368957 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 12:11:57.753072  368957 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0127 12:11:57.908834  368957 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-484253 host does not exist
	  To start a cluster, run: "minikube start -p download-only-484253"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-484253
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (17.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-356622 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-356622 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.436598035s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (17.44s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 12:12:34.739387  368946 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0127 12:12:34.739438  368946 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-356622
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-356622: exit status 85 (64.272073ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-484253 | jenkins | v1.35.0 | 27 Jan 25 12:11 UTC |                     |
	|         | -p download-only-484253        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC | 27 Jan 25 12:12 UTC |
	| delete  | -p download-only-484253        | download-only-484253 | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC | 27 Jan 25 12:12 UTC |
	| start   | -o=json --download-only        | download-only-356622 | jenkins | v1.35.0 | 27 Jan 25 12:12 UTC |                     |
	|         | -p download-only-356622        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:12:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:12:17.343524  369244 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:12:17.343921  369244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:12:17.344038  369244 out.go:358] Setting ErrFile to fd 2...
	I0127 12:12:17.344044  369244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:12:17.344217  369244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 12:12:17.344795  369244 out.go:352] Setting JSON to true
	I0127 12:12:17.345623  369244 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":17677,"bootTime":1737962260,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:12:17.345734  369244 start.go:139] virtualization: kvm guest
	I0127 12:12:17.347788  369244 out.go:97] [download-only-356622] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:12:17.347927  369244 notify.go:220] Checking for updates...
	I0127 12:12:17.349302  369244 out.go:169] MINIKUBE_LOCATION=20317
	I0127 12:12:17.350643  369244 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:12:17.351874  369244 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 12:12:17.353134  369244 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 12:12:17.354407  369244 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 12:12:17.356673  369244 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 12:12:17.356918  369244 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:12:17.388181  369244 out.go:97] Using the kvm2 driver based on user configuration
	I0127 12:12:17.388224  369244 start.go:297] selected driver: kvm2
	I0127 12:12:17.388233  369244 start.go:901] validating driver "kvm2" against <nil>
	I0127 12:12:17.388649  369244 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:12:17.388732  369244 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20317-361578/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 12:12:17.403276  369244 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 12:12:17.403328  369244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:12:17.403784  369244 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 12:12:17.403905  369244 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 12:12:17.403941  369244 cni.go:84] Creating CNI manager for ""
	I0127 12:12:17.403996  369244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0127 12:12:17.404005  369244 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 12:12:17.404063  369244 start.go:340] cluster config:
	{Name:download-only-356622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-356622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:12:17.404146  369244 iso.go:125] acquiring lock: {Name:mkc026b8dff3b6e4d3ce1210811a36aea711f2ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:12:17.405645  369244 out.go:97] Starting "download-only-356622" primary control-plane node in "download-only-356622" cluster
	I0127 12:12:17.405668  369244 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:12:18.165715  369244 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0127 12:12:18.165748  369244 cache.go:56] Caching tarball of preloaded images
	I0127 12:12:18.171771  369244 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0127 12:12:18.173542  369244 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0127 12:12:18.173557  369244 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0127 12:12:18.325691  369244 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20317-361578/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-356622 host does not exist
	  To start a cluster, run: "minikube start -p download-only-356622"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-356622
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 12:12:35.338236  368946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-799340 --alsologtostderr --binary-mirror http://127.0.0.1:41381 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-799340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-799340
--- PASS: TestBinaryMirror (0.64s)

                                                
                                    
TestOffline (83.29s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-363296 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-363296 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.390504209s)
helpers_test.go:175: Cleaning up "offline-crio-363296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-363296
--- PASS: TestOffline (83.29s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-645690
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-645690: exit status 85 (56.543234ms)

                                                
                                                
-- stdout --
	* Profile "addons-645690" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-645690"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-645690
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-645690: exit status 85 (54.685269ms)

                                                
                                                
-- stdout --
	* Profile "addons-645690" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-645690"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (209.83s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-645690 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-645690 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m29.831934153s)
--- PASS: TestAddons/Setup (209.83s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (1.69s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-645690 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-645690 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-645690 get secret gcp-auth -n new-namespace: exit status 1 (79.377127ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-645690 logs -l app=gcp-auth -n gcp-auth
I0127 12:16:06.359129  368946 retry.go:31] will retry after 1.424608585s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/01/27 12:16:05 GCP Auth Webhook started!
	2025/01/27 12:16:06 Ready to marshal response ...
	2025/01/27 12:16:06 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-645690 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.69s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (13.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-645690 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-645690 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [30dbd9e2-0420-4460-acb3-10f7edb4018c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [30dbd9e2-0420-4460-acb3-10f7edb4018c] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 13.003409505s
addons_test.go:633: (dbg) Run:  kubectl --context addons-645690 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-645690 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-645690 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (13.50s)

                                                
                                    
x
+
TestAddons/parallel/Registry (28.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.430411ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-vcc5g" [97285ffb-54b7-4f66-b39c-a22b1e7c77d3] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004665702s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-89gbc" [0e35959d-068c-4f26-8edf-27cc6aef30b4] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004436277s
addons_test.go:331: (dbg) Run:  kubectl --context addons-645690 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-645690 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-645690 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (15.204802805s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 ip
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable registry --alsologtostderr -v=1: (1.01157458s)
--- PASS: TestAddons/parallel/Registry (28.39s)
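
The registry check above boils down to probing the in-cluster service DNS name from a throwaway busybox pod. A minimal sketch of that probe, with the context and image names copied from the log; the test uses -it, but from a script with no TTY, -i alone is used here:

    // registry_probe.go - sketch of the in-cluster registry reachability check
    // shown above; context, image and service names come from the log.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "addons-645690",
            "run", "--rm", "registry-test", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "-i", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local",
        ).CombinedOutput()
        if err != nil {
            log.Fatalf("registry not reachable: %v\n%s", err, out)
        }
        fmt.Printf("registry responded:\n%s", out)
    }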

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.09s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xtgqd" [7f7f0b69-6698-475a-9fac-77602bf9c5d3] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003689792s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable inspektor-gadget --alsologtostderr -v=1: (6.085441234s)
--- PASS: TestAddons/parallel/InspektorGadget (12.09s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.74s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
I0127 12:16:29.686613  368946 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:394: metrics-server stabilized in 6.223882ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-4kg4p" [7ee8de18-dd61-4774-8716-7815448549e1] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003039272s
addons_test.go:402: (dbg) Run:  kubectl --context addons-645690 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

                                                
                                    
x
+
TestAddons/parallel/CSI (53.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.09506ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-645690 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-645690 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [19bb3c6d-1563-48a9-8931-f3ec013d71d8] Pending
helpers_test.go:344: "task-pv-pod" [19bb3c6d-1563-48a9-8931-f3ec013d71d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [19bb3c6d-1563-48a9-8931-f3ec013d71d8] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004111881s
addons_test.go:511: (dbg) Run:  kubectl --context addons-645690 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-645690 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-645690 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
2025/01/27 12:16:57 [DEBUG] GET http://192.168.39.68:5000
addons_test.go:521: (dbg) Run:  kubectl --context addons-645690 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-645690 delete pod task-pv-pod: (1.800619055s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-645690 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-645690 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-645690 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8f13e21c-71c7-44b3-bc6c-f0b66e7d354a] Pending
helpers_test.go:344: "task-pv-pod-restore" [8f13e21c-71c7-44b3-bc6c-f0b66e7d354a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8f13e21c-71c7-44b3-bc6c-f0b66e7d354a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.005363417s
addons_test.go:553: (dbg) Run:  kubectl --context addons-645690 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-645690 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-645690 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable volumesnapshots --alsologtostderr -v=1: (1.468289499s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.071831861s)
--- PASS: TestAddons/parallel/CSI (53.53s)
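
The long run of helpers_test.go:394 lines above is a poll loop on the PVC phase. A minimal sketch of that wait, reusing the PVC name and jsonpath from the log; the 2-second interval and 6-minute cap are illustrative values, not the helper's actual ones:

    // pvc_wait.go - sketch of polling a PVC until it reports phase "Bound",
    // mirroring the repeated "get pvc ... -o jsonpath={.status.phase}" calls above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "addons-645690",
                "get", "pvc", "hpvc", "-n", "default",
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Bound" {
                fmt.Println("pvc hpvc is Bound")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pvc hpvc to become Bound")
    }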

                                                
                                    
x
+
TestAddons/parallel/Headlamp (26.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-645690 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-645690 --alsologtostderr -v=1: (1.045094253s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-vk6zp" [f6bf04e6-cbcc-4b98-ab6c-50a8811890a5] Pending
helpers_test.go:344: "headlamp-69d78d796f-vk6zp" [f6bf04e6-cbcc-4b98-ab6c-50a8811890a5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-vk6zp" [f6bf04e6-cbcc-4b98-ab6c-50a8811890a5] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.003468153s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable headlamp --alsologtostderr -v=1: (6.026721953s)
--- PASS: TestAddons/parallel/Headlamp (26.08s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-xnqm4" [617b4427-9e50-4c29-8524-1f17322d958e] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00308254s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (69.34s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-645690 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-645690 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [feb3bf95-23f0-45a8-9a04-a160744b4f7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [feb3bf95-23f0-45a8-9a04-a160744b4f7a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [feb3bf95-23f0-45a8-9a04-a160744b4f7a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 18.005396582s
addons_test.go:906: (dbg) Run:  kubectl --context addons-645690 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 ssh "cat /opt/local-path-provisioner/pvc-ec5a4462-37e5-484e-bbe3-5c2da761259b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-645690 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-645690 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.523720984s)
--- PASS: TestAddons/parallel/LocalPath (69.34s)
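
After the test-local-path pod completes, the test reads the file it wrote straight off the node over SSH. A minimal sketch of that verification step; the pvc-... path below is the one printed in the log and changes on every run:

    // localpath_check.go - sketch of reading the file written by the
    // local-path-provisioner test pod via "minikube ssh"; the host path is the
    // run-specific one from the log above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        path := "/opt/local-path-provisioner/pvc-ec5a4462-37e5-484e-bbe3-5c2da761259b_default_test-pvc/file1"
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-645690",
            "ssh", "cat "+path).CombinedOutput()
        if err != nil {
            log.Fatalf("could not read %s: %v\n%s", path, err, out)
        }
        fmt.Printf("file1 contents: %s\n", out)
    }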

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sd4md" [5beaf7f5-9c24-418a-8c06-61c555ee367f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004435345s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.80s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-bwmjp" [82958c5f-4a2f-4beb-bab0-7b52bcec8ed1] Running
I0127 12:16:29.692660  368946 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 12:16:29.692682  368946 kapi.go:107] duration metric: took 6.086219ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00457445s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-645690 addons disable yakd --alsologtostderr -v=1: (5.912171416s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-645690
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-645690: (1m30.973166349s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-645690
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-645690
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-645690
--- PASS: TestAddons/StoppedEnableDisable (91.26s)

                                                
                                    
x
+
TestCertOptions (61.9s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-324444 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0127 13:19:22.615379  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-324444 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m0.468267565s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-324444 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-324444 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-324444 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-324444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-324444
--- PASS: TestCertOptions (61.90s)

                                                
                                    
x
+
TestCertExpiration (311.32s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-180143 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-180143 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (59.273680055s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-180143 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-180143 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m10.949715121s)
helpers_test.go:175: Cleaning up "cert-expiration-180143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-180143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-180143: (1.099349392s)
--- PASS: TestCertExpiration (311.32s)
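
The test starts the same profile twice: first with certificates that expire in three minutes, then, after they have lapsed, with a one-year expiry, confirming the restart regenerates them. A minimal sketch of that two-phase flow, assuming the flags shown in the log; the explicit three-minute sleep stands in for whatever wait the real test performs:

    // cert_expiration.go - sketch of the short-expiry start / wait / long-expiry
    // restart cycle shown above; profile name and flags come from the log.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func start(expiration string) {
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "cert-expiration-180143", "--memory=2048",
            "--cert-expiration="+expiration, "--driver=kvm2", "--container-runtime=crio")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("start with --cert-expiration=%s failed: %v\n%s", expiration, err, out)
        }
    }

    func main() {
        start("3m")
        time.Sleep(3 * time.Minute) // let the short-lived certs expire
        start("8760h")              // restart should rotate the expired certs
    }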

                                                
                                    
x
+
TestForceSystemdFlag (64.42s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-268206 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-268206 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m3.209034598s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-268206 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-268206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-268206
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-268206: (1.002863602s)
--- PASS: TestForceSystemdFlag (64.42s)

                                                
                                    
x
+
TestForceSystemdEnv (71.14s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-953289 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-953289 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.077487731s)
helpers_test.go:175: Cleaning up "force-systemd-env-953289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-953289
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-953289: (1.06547274s)
--- PASS: TestForceSystemdEnv (71.14s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (8.1s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0127 13:16:36.098816  368946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:16:36.098953  368946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 13:16:36.128230  368946 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 13:16:36.128623  368946 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 13:16:36.128676  368946 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate924092862/001/docker-machine-driver-kvm2
I0127 13:16:36.669312  368946 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate924092862/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000014c40 gz:0xc000014c48 tar:0xc0000149e0 tar.bz2:0xc000014c00 tar.gz:0xc000014c10 tar.xz:0xc000014c20 tar.zst:0xc000014c30 tbz2:0xc000014c00 tgz:0xc000014c10 txz:0xc000014c20 tzst:0xc000014c30 xz:0xc000014c50 zip:0xc000014c60 zst:0xc000014c58] Getters:map[file:0xc00008d460 http:0xc0007d0a00 https:0xc0007d0a50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 13:16:36.669368  368946 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate924092862/001/docker-machine-driver-kvm2
I0127 13:16:40.632396  368946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 13:16:40.632483  368946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 13:16:40.666321  368946 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 13:16:40.666357  368946 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 13:16:40.666436  368946 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 13:16:40.666471  368946 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate924092862/002/docker-machine-driver-kvm2
I0127 13:16:40.983286  368946 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate924092862/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc000014c40 gz:0xc000014c48 tar:0xc0000149e0 tar.bz2:0xc000014c00 tar.gz:0xc000014c10 tar.xz:0xc000014c20 tar.zst:0xc000014c30 tbz2:0xc000014c00 tgz:0xc000014c10 txz:0xc000014c20 tzst:0xc000014c30 xz:0xc000014c50 zip:0xc000014c60 zst:0xc000014c58] Getters:map[file:0xc0006833c0 http:0xc000778c30 https:0xc000778c80] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 13:16:40.983349  368946 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate924092862/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (8.10s)
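
The warnings above show the driver download falling back from the architecture-specific release asset (whose checksum file 404s for this release) to the unsuffixed common name. A minimal sketch of that fallback pattern using plain net/http, with the release and file names taken from the log; minikube's own download and checksum code is not reproduced here:

    // driver_fallback.go - sketch of trying the arch-specific release asset first
    // and falling back to the common name on failure, as the log above describes.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
    )

    func fetch(url, dst string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("bad response code: %d", resp.StatusCode)
        }
        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, resp.Body)
        return err
    }

    func main() {
        base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
        dst := "/tmp/docker-machine-driver-kvm2"
        if err := fetch(base+"docker-machine-driver-kvm2-amd64", dst); err != nil {
            log.Printf("arch specific driver failed (%v); trying the common version", err)
            if err := fetch(base+"docker-machine-driver-kvm2", dst); err != nil {
                log.Fatalf("fallback download failed: %v", err)
            }
        }
        fmt.Println("driver downloaded to", dst)
    }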

                                                
                                    
x
+
TestErrorSpam/setup (43.61s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-057940 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-057940 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-057940 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-057940 --driver=kvm2  --container-runtime=crio: (43.610934927s)
--- PASS: TestErrorSpam/setup (43.61s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 pause
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

                                                
                                    
x
+
TestErrorSpam/stop (5.7s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 stop: (2.318566313s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 stop: (1.894477489s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-057940 --log_dir /tmp/nospam-057940 stop: (1.490703084s)
--- PASS: TestErrorSpam/stop (5.70s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20317-361578/.minikube/files/etc/test/nested/copy/368946/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.86s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223147 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-223147 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.856669064s)
--- PASS: TestFunctional/serial/StartWithProxy (54.86s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (36.8s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0127 12:23:08.018895  368946 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223147 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-223147 --alsologtostderr -v=8: (36.797842588s)
functional_test.go:663: soft start took 36.798688121s for "functional-223147" cluster.
I0127 12:23:44.817151  368946 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (36.80s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-223147 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 cache add registry.k8s.io/pause:3.1: (1.119508585s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 cache add registry.k8s.io/pause:3.3: (1.125297811s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 cache add registry.k8s.io/pause:latest: (1.13138034s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.78s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-223147 /tmp/TestFunctionalserialCacheCmdcacheadd_local1482998719/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cache add minikube-local-cache-test:functional-223147
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 cache add minikube-local-cache-test:functional-223147: (2.474521108s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cache delete minikube-local-cache-test:functional-223147
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-223147
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.78s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.97182ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
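
The reload test removes an image from the node, confirms crictl no longer sees it, then uses "cache reload" to push it back from the local cache. A minimal sketch of that remove / verify-missing / reload / verify-present cycle, with the image and profile names from the log above:

    // cache_reload.go - sketch of the cache reload cycle shown above; image and
    // profile names are copied from the log, not invented.
    package main

    import (
        "log"
        "os/exec"
    )

    func mk(args ...string) ([]byte, error) {
        return exec.Command("out/minikube-linux-amd64",
            append([]string{"-p", "functional-223147"}, args...)...).CombinedOutput()
    }

    func main() {
        const img = "registry.k8s.io/pause:latest"
        if out, err := mk("ssh", "sudo crictl rmi "+img); err != nil {
            log.Fatalf("rmi failed: %v\n%s", err, out)
        }
        if _, err := mk("ssh", "sudo crictl inspecti "+img); err == nil {
            log.Fatal("image still present after rmi")
        }
        if out, err := mk("cache", "reload"); err != nil {
            log.Fatalf("cache reload failed: %v\n%s", err, out)
        }
        if out, err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
            log.Fatalf("image missing after reload: %v\n%s", err, out)
        }
        log.Printf("%s restored from cache", img)
    }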

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 kubectl -- --context functional-223147 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-223147 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (36.41s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223147 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-223147 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.4093329s)
functional_test.go:761: restart took 36.409490417s for "functional-223147" cluster.
I0127 12:24:29.788982  368946 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (36.41s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-223147 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 logs: (1.531510018s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 logs --file /tmp/TestFunctionalserialLogsFileCmd3626456490/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 logs --file /tmp/TestFunctionalserialLogsFileCmd3626456490/001/logs.txt: (1.415501362s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.4s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-223147 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-223147
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-223147: exit status 115 (381.74545ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.195:31447 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-223147 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-223147 delete -f testdata/invalidsvc.yaml: (1.825827083s)
--- PASS: TestFunctional/serial/InvalidService (5.40s)
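
The invalid-service check relies on "minikube service" exiting non-zero when the service has no running pods; exit status 115 is what this run produced. A minimal sketch of asserting that exit code, with the service and profile names from the log (115 is the observed value, not a documented contract):

    // invalid_service.go - sketch of asserting that "minikube service" fails for
    // a service with no backing pods, as in the log above.
    package main

    import (
        "errors"
        "log"
        "os/exec"
    )

    func main() {
        err := exec.Command("out/minikube-linux-amd64",
            "service", "invalid-svc", "-p", "functional-223147").Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
            log.Println("got the expected SVC_UNREACHABLE failure (exit 115)")
            return
        }
        log.Fatalf("expected exit status 115, got: %v", err)
    }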

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 config get cpus: exit status 14 (73.428597ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 config get cpus: exit status 14 (63.547883ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (29.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-223147 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-223147 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 376776: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.02s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223147 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-223147 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.943449ms)

                                                
                                                
-- stdout --
	* [functional-223147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:24:38.879249  376380 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:24:38.879361  376380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:24:38.879373  376380 out.go:358] Setting ErrFile to fd 2...
	I0127 12:24:38.879377  376380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:24:38.879588  376380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 12:24:38.880143  376380 out.go:352] Setting JSON to false
	I0127 12:24:38.881224  376380 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":18419,"bootTime":1737962260,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:24:38.881370  376380 start.go:139] virtualization: kvm guest
	I0127 12:24:38.883344  376380 out.go:177] * [functional-223147] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 12:24:38.884655  376380 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:24:38.884685  376380 notify.go:220] Checking for updates...
	I0127 12:24:38.886843  376380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:24:38.888081  376380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 12:24:38.889220  376380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 12:24:38.890471  376380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:24:38.891734  376380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:24:38.893486  376380 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:24:38.894100  376380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:24:38.894159  376380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:24:38.911787  376380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33001
	I0127 12:24:38.912204  376380 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:24:38.912758  376380 main.go:141] libmachine: Using API Version  1
	I0127 12:24:38.912773  376380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:24:38.913040  376380 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:24:38.913203  376380 main.go:141] libmachine: (functional-223147) Calling .DriverName
	I0127 12:24:38.913405  376380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:24:38.913675  376380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:24:38.913706  376380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:24:38.932638  376380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0127 12:24:38.933122  376380 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:24:38.933662  376380 main.go:141] libmachine: Using API Version  1
	I0127 12:24:38.933680  376380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:24:38.934015  376380 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:24:38.934235  376380 main.go:141] libmachine: (functional-223147) Calling .DriverName
	I0127 12:24:38.973008  376380 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 12:24:38.974197  376380 start.go:297] selected driver: kvm2
	I0127 12:24:38.974216  376380 start.go:901] validating driver "kvm2" against &{Name:functional-223147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-223147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:24:38.974352  376380 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:24:38.976765  376380 out.go:201] 
	W0127 12:24:38.978139  376380 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 12:24:38.979419  376380 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223147 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-223147 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-223147 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.895334ms)

                                                
                                                
-- stdout --
	* [functional-223147] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:24:38.734867  376318 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:24:38.734999  376318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:24:38.735011  376318 out.go:358] Setting ErrFile to fd 2...
	I0127 12:24:38.735018  376318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:24:38.735325  376318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 12:24:38.735839  376318 out.go:352] Setting JSON to false
	I0127 12:24:38.736817  376318 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":18419,"bootTime":1737962260,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 12:24:38.736976  376318 start.go:139] virtualization: kvm guest
	I0127 12:24:38.739310  376318 out.go:177] * [functional-223147] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 12:24:38.740996  376318 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 12:24:38.741029  376318 notify.go:220] Checking for updates...
	I0127 12:24:38.743333  376318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:24:38.744574  376318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 12:24:38.745882  376318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 12:24:38.747060  376318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 12:24:38.748224  376318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:24:38.750073  376318 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:24:38.750764  376318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:24:38.750822  376318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:24:38.767028  376318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32791
	I0127 12:24:38.767370  376318 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:24:38.767863  376318 main.go:141] libmachine: Using API Version  1
	I0127 12:24:38.767882  376318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:24:38.768180  376318 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:24:38.768365  376318 main.go:141] libmachine: (functional-223147) Calling .DriverName
	I0127 12:24:38.768576  376318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:24:38.768858  376318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:24:38.768902  376318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:24:38.785173  376318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33751
	I0127 12:24:38.785519  376318 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:24:38.785923  376318 main.go:141] libmachine: Using API Version  1
	I0127 12:24:38.785944  376318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:24:38.786348  376318 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:24:38.786604  376318 main.go:141] libmachine: (functional-223147) Calling .DriverName
	I0127 12:24:38.820423  376318 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 12:24:38.821772  376318 start.go:297] selected driver: kvm2
	I0127 12:24:38.821790  376318 start.go:901] validating driver "kvm2" against &{Name:functional-223147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-223147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:24:38.821931  376318 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:24:38.823958  376318 out.go:201] 
	W0127 12:24:38.825094  376318 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 12:24:38.826275  376318 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-223147 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-223147 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-5mtng" [4c75546f-2b64-4aba-93ea-020f6b70ea08] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-5mtng" [4c75546f-2b64-4aba-93ea-020f6b70ea08] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005057396s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.195:32523
functional_test.go:1675: http://192.168.39.195:32523: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-5mtng

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.195:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.195:32523
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.46s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8195e58c-0710-4435-be99-d602ff1a7db1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003616255s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-223147 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-223147 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-223147 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-223147 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4ca4c3ba-eb4c-42fe-be36-31d107cf2a77] Pending
helpers_test.go:344: "sp-pod" [4ca4c3ba-eb4c-42fe-be36-31d107cf2a77] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4ca4c3ba-eb4c-42fe-be36-31d107cf2a77] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.004275306s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-223147 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-223147 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-223147 delete -f testdata/storage-provisioner/pod.yaml: (3.769113398s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-223147 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b27c21c8-4856-430b-bf73-d3d99de6b6fd] Pending
helpers_test.go:344: "sp-pod" [b27c21c8-4856-430b-bf73-d3d99de6b6fd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b27c21c8-4856-430b-bf73-d3d99de6b6fd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003719485s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-223147 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.86s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh -n functional-223147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cp functional-223147:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4221177019/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh -n functional-223147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh -n functional-223147 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

                                                
                                    
TestFunctional/parallel/MySQL (27.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-223147 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-bxqmg" [5f569e8f-e01d-4939-b9e9-36b0348a543e] Pending
helpers_test.go:344: "mysql-58ccfd96bb-bxqmg" [5f569e8f-e01d-4939-b9e9-36b0348a543e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-bxqmg" [5f569e8f-e01d-4939-b9e9-36b0348a543e] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.004488837s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-223147 exec mysql-58ccfd96bb-bxqmg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-223147 exec mysql-58ccfd96bb-bxqmg -- mysql -ppassword -e "show databases;": exit status 1 (132.467107ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:25:29.125951  368946 retry.go:31] will retry after 803.230342ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-223147 exec mysql-58ccfd96bb-bxqmg -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-223147 exec mysql-58ccfd96bb-bxqmg -- mysql -ppassword -e "show databases;": exit status 1 (143.684462ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 12:25:30.073646  368946 retry.go:31] will retry after 1.069732333s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-223147 exec mysql-58ccfd96bb-bxqmg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.50s)

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/368946/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo cat /etc/test/nested/copy/368946/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/368946.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo cat /etc/ssl/certs/368946.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/368946.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo cat /usr/share/ca-certificates/368946.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3689462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo cat /etc/ssl/certs/3689462.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3689462.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo cat /usr/share/ca-certificates/3689462.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.24s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-223147 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 ssh "sudo systemctl is-active docker": exit status 1 (190.189356ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 ssh "sudo systemctl is-active containerd": exit status 1 (193.261807ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                    
TestFunctional/parallel/License (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.81s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (13.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdany-port1558740932/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737980678287654476" to /tmp/TestFunctionalparallelMountCmdany-port1558740932/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737980678287654476" to /tmp/TestFunctionalparallelMountCmdany-port1558740932/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737980678287654476" to /tmp/TestFunctionalparallelMountCmdany-port1558740932/001/test-1737980678287654476
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.917361ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 12:24:38.526925  368946 retry.go:31] will retry after 639.240791ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 12:24 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 12:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 12:24 test-1737980678287654476
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh cat /mount-9p/test-1737980678287654476
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-223147 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a20aee90-9479-4c3c-8e80-a58b8b6b7feb] Pending
helpers_test.go:344: "busybox-mount" [a20aee90-9479-4c3c-8e80-a58b8b6b7feb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a20aee90-9479-4c3c-8e80-a58b8b6b7feb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a20aee90-9479-4c3c-8e80-a58b8b6b7feb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.005736201s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-223147 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdany-port1558740932/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-223147 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-223147 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-m945k" [79ec7253-a586-4b4e-a774-35b5c1e61f2b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-m945k" [79ec7253-a586-4b4e-a774-35b5c1e61f2b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.360198827s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdspecific-port4035575572/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (188.642892ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 12:24:52.223692  368946 retry.go:31] will retry after 402.589923ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdspecific-port4035575572/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 ssh "sudo umount -f /mount-9p": exit status 1 (191.248546ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-223147 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdspecific-port4035575572/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup196095724/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup196095724/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup196095724/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T" /mount1: exit status 1 (232.338715ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0127 12:24:53.883034  368946 retry.go:31] will retry after 593.429308ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-223147 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup196095724/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup196095724/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-223147 /tmp/TestFunctionalparallelMountCmdVerifyCleanup196095724/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223147 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-223147
localhost/kicbase/echo-server:functional-223147
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223147 image ls --format short --alsologtostderr:
I0127 12:25:09.870808  378519 out.go:345] Setting OutFile to fd 1 ...
I0127 12:25:09.870928  378519 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:09.870938  378519 out.go:358] Setting ErrFile to fd 2...
I0127 12:25:09.870945  378519 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:09.871181  378519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
I0127 12:25:09.871840  378519 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:09.871975  378519 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:09.872378  378519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:09.872433  378519 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:09.889648  378519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32979
I0127 12:25:09.890199  378519 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:09.890841  378519 main.go:141] libmachine: Using API Version  1
I0127 12:25:09.890863  378519 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:09.891479  378519 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:09.891703  378519 main.go:141] libmachine: (functional-223147) Calling .GetState
I0127 12:25:09.893577  378519 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:09.893624  378519 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:09.909248  378519 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
I0127 12:25:09.909747  378519 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:09.910300  378519 main.go:141] libmachine: Using API Version  1
I0127 12:25:09.910320  378519 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:09.910655  378519 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:09.910911  378519 main.go:141] libmachine: (functional-223147) Calling .DriverName
I0127 12:25:09.911107  378519 ssh_runner.go:195] Run: systemctl --version
I0127 12:25:09.911135  378519 main.go:141] libmachine: (functional-223147) Calling .GetSSHHostname
I0127 12:25:09.913970  378519 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:09.914446  378519 main.go:141] libmachine: (functional-223147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:6f:ad", ip: ""} in network mk-functional-223147: {Iface:virbr1 ExpiryTime:2025-01-27 13:22:28 +0000 UTC Type:0 Mac:52:54:00:59:6f:ad Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-223147 Clientid:01:52:54:00:59:6f:ad}
I0127 12:25:09.914468  378519 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined IP address 192.168.39.195 and MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:09.914659  378519 main.go:141] libmachine: (functional-223147) Calling .GetSSHPort
I0127 12:25:09.914813  378519 main.go:141] libmachine: (functional-223147) Calling .GetSSHKeyPath
I0127 12:25:09.914942  378519 main.go:141] libmachine: (functional-223147) Calling .GetSSHUsername
I0127 12:25:09.915032  378519 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/functional-223147/id_rsa Username:docker}
I0127 12:25:10.000441  378519 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:25:10.042747  378519 main.go:141] libmachine: Making call to close driver server
I0127 12:25:10.042765  378519 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:10.043028  378519 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:10.043048  378519 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:25:10.043070  378519 main.go:141] libmachine: Making call to close driver server
I0127 12:25:10.043077  378519 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:10.043079  378519 main.go:141] libmachine: (functional-223147) DBG | Closing plugin on server side
I0127 12:25:10.043354  378519 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:10.043385  378519 main.go:141] libmachine: (functional-223147) DBG | Closing plugin on server side
I0127 12:25:10.043394  378519 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223147 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-223147  | 93024d0ff1f69 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| localhost/kicbase/echo-server           | functional-223147  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223147 image ls --format table --alsologtostderr:
I0127 12:25:10.390737  378657 out.go:345] Setting OutFile to fd 1 ...
I0127 12:25:10.390833  378657 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:10.390841  378657 out.go:358] Setting ErrFile to fd 2...
I0127 12:25:10.390846  378657 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:10.391012  378657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
I0127 12:25:10.391600  378657 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:10.391707  378657 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:10.392109  378657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:10.392167  378657 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:10.407444  378657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
I0127 12:25:10.407922  378657 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:10.408447  378657 main.go:141] libmachine: Using API Version  1
I0127 12:25:10.408464  378657 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:10.408821  378657 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:10.409103  378657 main.go:141] libmachine: (functional-223147) Calling .GetState
I0127 12:25:10.411225  378657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:10.411274  378657 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:10.426146  378657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
I0127 12:25:10.426569  378657 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:10.427069  378657 main.go:141] libmachine: Using API Version  1
I0127 12:25:10.427093  378657 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:10.427466  378657 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:10.427705  378657 main.go:141] libmachine: (functional-223147) Calling .DriverName
I0127 12:25:10.427891  378657 ssh_runner.go:195] Run: systemctl --version
I0127 12:25:10.427922  378657 main.go:141] libmachine: (functional-223147) Calling .GetSSHHostname
I0127 12:25:10.431194  378657 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:10.431678  378657 main.go:141] libmachine: (functional-223147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:6f:ad", ip: ""} in network mk-functional-223147: {Iface:virbr1 ExpiryTime:2025-01-27 13:22:28 +0000 UTC Type:0 Mac:52:54:00:59:6f:ad Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-223147 Clientid:01:52:54:00:59:6f:ad}
I0127 12:25:10.431696  378657 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined IP address 192.168.39.195 and MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:10.431872  378657 main.go:141] libmachine: (functional-223147) Calling .GetSSHPort
I0127 12:25:10.432058  378657 main.go:141] libmachine: (functional-223147) Calling .GetSSHKeyPath
I0127 12:25:10.432212  378657 main.go:141] libmachine: (functional-223147) Calling .GetSSHUsername
I0127 12:25:10.432406  378657 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/functional-223147/id_rsa Username:docker}
I0127 12:25:10.511727  378657 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:25:10.562598  378657 main.go:141] libmachine: Making call to close driver server
I0127 12:25:10.562618  378657 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:10.562968  378657 main.go:141] libmachine: (functional-223147) DBG | Closing plugin on server side
I0127 12:25:10.562976  378657 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:10.562995  378657 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:25:10.563012  378657 main.go:141] libmachine: Making call to close driver server
I0127 12:25:10.563023  378657 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:10.563285  378657 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:10.563302  378657 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223147 image ls --format json --alsologtostderr:
[{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"93024d0ff1f69227e88969fa047d60b368cca1574ceb5f78447dcb9eaa2286df","repoDigests":["localhost/minikube-local-cache-test@sha256:56f3cac33905c146945b6a4299b7f953c9a3f821593c08cf4d5285f561953c47"],"repoTags":["localhost/minikube-local-cache-test:functional-223147"],"size":"3330"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:
1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae
1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-223147"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced9
24d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/met
rics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf
583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","do
cker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223147 image ls --format json --alsologtostderr:
I0127 12:25:10.173043  378598 out.go:345] Setting OutFile to fd 1 ...
I0127 12:25:10.173158  378598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:10.173169  378598 out.go:358] Setting ErrFile to fd 2...
I0127 12:25:10.173174  378598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:10.173407  378598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
I0127 12:25:10.174034  378598 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:10.174158  378598 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:10.174575  378598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:10.174648  378598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:10.192017  378598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
I0127 12:25:10.192522  378598 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:10.193167  378598 main.go:141] libmachine: Using API Version  1
I0127 12:25:10.193200  378598 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:10.193549  378598 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:10.193729  378598 main.go:141] libmachine: (functional-223147) Calling .GetState
I0127 12:25:10.195573  378598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:10.195613  378598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:10.210103  378598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40027
I0127 12:25:10.210490  378598 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:10.210903  378598 main.go:141] libmachine: Using API Version  1
I0127 12:25:10.210922  378598 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:10.211279  378598 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:10.211487  378598 main.go:141] libmachine: (functional-223147) Calling .DriverName
I0127 12:25:10.211690  378598 ssh_runner.go:195] Run: systemctl --version
I0127 12:25:10.211718  378598 main.go:141] libmachine: (functional-223147) Calling .GetSSHHostname
I0127 12:25:10.214356  378598 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:10.214779  378598 main.go:141] libmachine: (functional-223147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:6f:ad", ip: ""} in network mk-functional-223147: {Iface:virbr1 ExpiryTime:2025-01-27 13:22:28 +0000 UTC Type:0 Mac:52:54:00:59:6f:ad Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-223147 Clientid:01:52:54:00:59:6f:ad}
I0127 12:25:10.214822  378598 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined IP address 192.168.39.195 and MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:10.214930  378598 main.go:141] libmachine: (functional-223147) Calling .GetSSHPort
I0127 12:25:10.215139  378598 main.go:141] libmachine: (functional-223147) Calling .GetSSHKeyPath
I0127 12:25:10.215248  378598 main.go:141] libmachine: (functional-223147) Calling .GetSSHUsername
I0127 12:25:10.215391  378598 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/functional-223147/id_rsa Username:docker}
I0127 12:25:10.293834  378598 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:25:10.337915  378598 main.go:141] libmachine: Making call to close driver server
I0127 12:25:10.337941  378598 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:10.338168  378598 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:10.338182  378598 main.go:141] libmachine: (functional-223147) DBG | Closing plugin on server side
I0127 12:25:10.338188  378598 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:25:10.338202  378598 main.go:141] libmachine: Making call to close driver server
I0127 12:25:10.338218  378598 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:10.338441  378598 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:10.338463  378598 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:25:10.338470  378598 main.go:141] libmachine: (functional-223147) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223147 image ls --format yaml --alsologtostderr:
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-223147
size: "4943877"
- id: 93024d0ff1f69227e88969fa047d60b368cca1574ceb5f78447dcb9eaa2286df
repoDigests:
- localhost/minikube-local-cache-test@sha256:56f3cac33905c146945b6a4299b7f953c9a3f821593c08cf4d5285f561953c47
repoTags:
- localhost/minikube-local-cache-test:functional-223147
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223147 image ls --format yaml --alsologtostderr:
I0127 12:25:09.939652  378551 out.go:345] Setting OutFile to fd 1 ...
I0127 12:25:09.939763  378551 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:09.939773  378551 out.go:358] Setting ErrFile to fd 2...
I0127 12:25:09.939778  378551 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:09.939967  378551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
I0127 12:25:09.940606  378551 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:09.940722  378551 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:09.941101  378551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:09.941181  378551 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:09.956331  378551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
I0127 12:25:09.956728  378551 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:09.957310  378551 main.go:141] libmachine: Using API Version  1
I0127 12:25:09.957339  378551 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:09.957694  378551 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:09.957894  378551 main.go:141] libmachine: (functional-223147) Calling .GetState
I0127 12:25:09.959520  378551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:09.959563  378551 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:09.973880  378551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
I0127 12:25:09.974236  378551 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:09.974702  378551 main.go:141] libmachine: Using API Version  1
I0127 12:25:09.974724  378551 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:09.975017  378551 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:09.975212  378551 main.go:141] libmachine: (functional-223147) Calling .DriverName
I0127 12:25:09.975406  378551 ssh_runner.go:195] Run: systemctl --version
I0127 12:25:09.975434  378551 main.go:141] libmachine: (functional-223147) Calling .GetSSHHostname
I0127 12:25:09.977814  378551 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:09.978221  378551 main.go:141] libmachine: (functional-223147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:6f:ad", ip: ""} in network mk-functional-223147: {Iface:virbr1 ExpiryTime:2025-01-27 13:22:28 +0000 UTC Type:0 Mac:52:54:00:59:6f:ad Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-223147 Clientid:01:52:54:00:59:6f:ad}
I0127 12:25:09.978251  378551 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined IP address 192.168.39.195 and MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:09.978391  378551 main.go:141] libmachine: (functional-223147) Calling .GetSSHPort
I0127 12:25:09.978566  378551 main.go:141] libmachine: (functional-223147) Calling .GetSSHKeyPath
I0127 12:25:09.978739  378551 main.go:141] libmachine: (functional-223147) Calling .GetSSHUsername
I0127 12:25:09.978892  378551 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/functional-223147/id_rsa Username:docker}
I0127 12:25:10.069205  378551 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 12:25:10.115841  378551 main.go:141] libmachine: Making call to close driver server
I0127 12:25:10.115858  378551 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:10.116114  378551 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:10.116134  378551 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:25:10.116169  378551 main.go:141] libmachine: Making call to close driver server
I0127 12:25:10.116182  378551 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:10.116397  378551 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:10.116422  378551 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:25:10.116428  378551 main.go:141] libmachine: (functional-223147) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
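
For reference, the four ImageCommands/ImageList* checks above only vary the output format of the same listing command. A minimal sketch, assuming the functional-223147 profile from this run and the out/minikube-linux-amd64 binary built for these tests (any minikube binary and profile name behave the same way):

    # List the images known to the cluster's container runtime (CRI-O in this job),
    # in each of the supported output formats.
    out/minikube-linux-amd64 -p functional-223147 image ls --format short
    out/minikube-linux-amd64 -p functional-223147 image ls --format table
    out/minikube-linux-amd64 -p functional-223147 image ls --format json
    out/minikube-linux-amd64 -p functional-223147 image ls --format yaml

The stderr traces above show each invocation doing the same work: SSH into the VM, run "sudo crictl images --output json", and reformat the result.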

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (7.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-223147 ssh pgrep buildkitd: exit status 1 (198.50576ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image build -t localhost/my-image:functional-223147 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 image build -t localhost/my-image:functional-223147 testdata/build --alsologtostderr: (6.617216032s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-223147 image build -t localhost/my-image:functional-223147 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 344667ecd0c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-223147
--> dcb8edd9056
Successfully tagged localhost/my-image:functional-223147
dcb8edd905674420aaa42fc4b7bc88934f0642bd2f28669a1bfaf9448c974587
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-223147 image build -t localhost/my-image:functional-223147 testdata/build --alsologtostderr:
I0127 12:25:10.294693  378633 out.go:345] Setting OutFile to fd 1 ...
I0127 12:25:10.295451  378633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:10.295466  378633 out.go:358] Setting ErrFile to fd 2...
I0127 12:25:10.295470  378633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 12:25:10.295667  378633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
I0127 12:25:10.296324  378633 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:10.296869  378633 config.go:182] Loaded profile config "functional-223147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0127 12:25:10.297206  378633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:10.297244  378633 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:10.312944  378633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
I0127 12:25:10.313355  378633 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:10.313836  378633 main.go:141] libmachine: Using API Version  1
I0127 12:25:10.313855  378633 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:10.314242  378633 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:10.314442  378633 main.go:141] libmachine: (functional-223147) Calling .GetState
I0127 12:25:10.316201  378633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0127 12:25:10.316247  378633 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 12:25:10.331317  378633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
I0127 12:25:10.331713  378633 main.go:141] libmachine: () Calling .GetVersion
I0127 12:25:10.332179  378633 main.go:141] libmachine: Using API Version  1
I0127 12:25:10.332193  378633 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 12:25:10.332493  378633 main.go:141] libmachine: () Calling .GetMachineName
I0127 12:25:10.332676  378633 main.go:141] libmachine: (functional-223147) Calling .DriverName
I0127 12:25:10.332871  378633 ssh_runner.go:195] Run: systemctl --version
I0127 12:25:10.332895  378633 main.go:141] libmachine: (functional-223147) Calling .GetSSHHostname
I0127 12:25:10.335807  378633 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:10.336223  378633 main.go:141] libmachine: (functional-223147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:6f:ad", ip: ""} in network mk-functional-223147: {Iface:virbr1 ExpiryTime:2025-01-27 13:22:28 +0000 UTC Type:0 Mac:52:54:00:59:6f:ad Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-223147 Clientid:01:52:54:00:59:6f:ad}
I0127 12:25:10.336266  378633 main.go:141] libmachine: (functional-223147) DBG | domain functional-223147 has defined IP address 192.168.39.195 and MAC address 52:54:00:59:6f:ad in network mk-functional-223147
I0127 12:25:10.336363  378633 main.go:141] libmachine: (functional-223147) Calling .GetSSHPort
I0127 12:25:10.336561  378633 main.go:141] libmachine: (functional-223147) Calling .GetSSHKeyPath
I0127 12:25:10.336742  378633 main.go:141] libmachine: (functional-223147) Calling .GetSSHUsername
I0127 12:25:10.336924  378633 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/functional-223147/id_rsa Username:docker}
I0127 12:25:10.413796  378633 build_images.go:161] Building image from path: /tmp/build.2175643602.tar
I0127 12:25:10.413851  378633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 12:25:10.425043  378633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2175643602.tar
I0127 12:25:10.430356  378633 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2175643602.tar: stat -c "%s %y" /var/lib/minikube/build/build.2175643602.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2175643602.tar': No such file or directory
I0127 12:25:10.430392  378633 ssh_runner.go:362] scp /tmp/build.2175643602.tar --> /var/lib/minikube/build/build.2175643602.tar (3072 bytes)
I0127 12:25:10.457612  378633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2175643602
I0127 12:25:10.477881  378633 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2175643602 -xf /var/lib/minikube/build/build.2175643602.tar
I0127 12:25:10.487932  378633 crio.go:315] Building image: /var/lib/minikube/build/build.2175643602
I0127 12:25:10.487988  378633 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-223147 /var/lib/minikube/build/build.2175643602 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0127 12:25:16.825122  378633 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-223147 /var/lib/minikube/build/build.2175643602 --cgroup-manager=cgroupfs: (6.337096663s)
I0127 12:25:16.825217  378633 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2175643602
I0127 12:25:16.841003  378633 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2175643602.tar
I0127 12:25:16.857830  378633 build_images.go:217] Built localhost/my-image:functional-223147 from /tmp/build.2175643602.tar
I0127 12:25:16.857873  378633 build_images.go:133] succeeded building to: functional-223147
I0127 12:25:16.857880  378633 build_images.go:134] failed building to: 
I0127 12:25:16.857910  378633 main.go:141] libmachine: Making call to close driver server
I0127 12:25:16.857926  378633 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:16.858281  378633 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:16.858301  378633 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 12:25:16.858332  378633 main.go:141] libmachine: (functional-223147) DBG | Closing plugin on server side
I0127 12:25:16.858403  378633 main.go:141] libmachine: Making call to close driver server
I0127 12:25:16.858427  378633 main.go:141] libmachine: (functional-223147) Calling .Close
I0127 12:25:16.858698  378633 main.go:141] libmachine: Successfully made call to close driver server
I0127 12:25:16.858722  378633 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.15s)
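
For reference, the ImageBuild check above can be reproduced by hand. A minimal sketch, assuming the functional-223147 profile and a build context laid out like the repository's testdata/build directory (the three STEPs in the stdout above correspond to its Dockerfile); the target tag is arbitrary:

    # Build an image inside the minikube VM. The stderr trace shows minikube copying
    # the context to /var/lib/minikube/build as a tar and, on CRI-O, delegating to
    # "sudo podman build ... --cgroup-manager=cgroupfs".
    out/minikube-linux-amd64 -p functional-223147 image build -t localhost/my-image:functional-223147 testdata/build --alsologtostderr

    # The built image should now appear in the cluster's image list.
    out/minikube-linux-amd64 -p functional-223147 image ls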

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.41512421s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-223147
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image load --daemon kicbase/echo-server:functional-223147 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 image load --daemon kicbase/echo-server:functional-223147 --alsologtostderr: (2.065429545s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.29s)
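
For reference, the Setup and ImageLoadDaemon checks together cover the host-Docker-to-cluster direction. A minimal sketch with the image and profile names used in this run; the retag to a profile-specific name is only a convention of the test suite:

    # Prepare a test image on the host (Setup).
    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-223147

    # Copy it from the host Docker daemon into the cluster's image store (ImageLoadDaemon),
    # then confirm it is visible to the runtime.
    out/minikube-linux-amd64 -p functional-223147 image load --daemon kicbase/echo-server:functional-223147
    out/minikube-linux-amd64 -p functional-223147 image ls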

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.93s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 service list -o json
functional_test.go:1494: Took "851.853416ms" to run "out/minikube-linux-amd64 -p functional-223147 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image load --daemon kicbase/echo-server:functional-223147 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.195:32008
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.16806481s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-223147
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image load --daemon kicbase/echo-server:functional-223147 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.195:32008
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
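
For reference, the ServiceCmd checks above all resolve the same NodePort service to a reachable endpoint (this run reported 192.168.39.195:32008). A minimal sketch, assuming the hello-node deployment and service created earlier in the suite:

    # List services and their URLs, as text and as JSON.
    out/minikube-linux-amd64 -p functional-223147 service list
    out/minikube-linux-amd64 -p functional-223147 service list -o json

    # Resolve one service to an https:// URL, a plain URL, or just the node IP.
    out/minikube-linux-amd64 -p functional-223147 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-223147 service hello-node --url
    out/minikube-linux-amd64 -p functional-223147 service hello-node --url --format='{{.IP}}'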

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
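
For reference, the three UpdateContextCmd variants run the same command and appear to differ only in the kubeconfig state they start from. A minimal sketch:

    # Refresh the kubeconfig entry for this profile so it points at the cluster's
    # current endpoint (useful if the VM's IP address has changed).
    out/minikube-linux-amd64 -p functional-223147 update-context --alsologtostderr -v=2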

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image save kicbase/echo-server:functional-223147 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 image save kicbase/echo-server:functional-223147 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.115349316s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image rm kicbase/echo-server:functional-223147 --alsologtostderr
2025/01/27 12:25:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-223147 image rm kicbase/echo-server:functional-223147 --alsologtostderr: (1.267638323s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.00s)
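
For reference, ImageSaveToFile, ImageRemove and ImageLoadFromFile above form a tarball round trip. A minimal sketch with this run's image name; the Jenkins workspace path is replaced here by a local ./echo-server-save.tar:

    # Export an image from the cluster runtime to a tar archive on the host.
    out/minikube-linux-amd64 -p functional-223147 image save kicbase/echo-server:functional-223147 ./echo-server-save.tar --alsologtostderr

    # Remove it from the cluster, then restore it from the archive and verify.
    out/minikube-linux-amd64 -p functional-223147 image rm kicbase/echo-server:functional-223147 --alsologtostderr
    out/minikube-linux-amd64 -p functional-223147 image load ./echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-223147 image ls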

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "326.259785ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "56.145587ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "281.465997ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.134749ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)
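
For reference, the ProfileCmd checks only assert that profile listing succeeds and stays fast. A minimal sketch of the invocations being timed:

    # Human-readable and JSON profile listings; the -l / --light variants skip
    # querying cluster status, which is why they return in roughly 50ms above.
    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light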

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-223147
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-223147 image save --daemon kicbase/echo-server:functional-223147 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-223147
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
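
For reference, ImageSaveDaemon covers the reverse direction: copying an image from the cluster runtime back into the host's Docker daemon. A minimal sketch with this run's names; note that the final check inspects the image under the localhost/ prefix, which is how it comes back in this run:

    # Drop the host copy, pull the image back out of the cluster, and verify it.
    docker rmi kicbase/echo-server:functional-223147
    out/minikube-linux-amd64 -p functional-223147 image save --daemon kicbase/echo-server:functional-223147 --alsologtostderr
    docker image inspect localhost/kicbase/echo-server:functional-223147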

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-223147
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-223147
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-223147
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (204.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-965156 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 12:26:08.031524  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:08.037961  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:08.049349  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:08.070820  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:08.112233  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:08.193712  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:08.355299  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:08.677040  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:09.319175  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:10.600961  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:13.162692  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:18.284335  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:28.526504  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:26:49.007958  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:27:29.969843  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:28:51.891872  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-965156 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.263173858s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.94s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (10.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-965156 -- rollout status deployment/busybox: (8.687292375s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-5vcmk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-7qlc8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-zg2rx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-5vcmk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-7qlc8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-zg2rx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-5vcmk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-7qlc8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-zg2rx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.82s)
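
The DeployApp checks above resolve three DNS names from inside every busybox pod. A small Go sketch of the same probe, using `kubectl --context ha-965156` as elsewhere in this report; the `busybox-` name-prefix filter is an assumption based on the pod names shown above.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const context = "ha-965156"

	// Same jsonpath the test uses to enumerate pods.
	out, err := exec.Command("kubectl", "--context", context, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("listing pods failed: %v", err)
	}

	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue
		}
		for _, host := range names {
			res, err := exec.Command("kubectl", "--context", context,
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				log.Fatalf("%s: nslookup %s failed: %v\n%s", pod, host, err, res)
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}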

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-5vcmk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-5vcmk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-7qlc8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-7qlc8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-zg2rx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965156 -- exec busybox-58667487b6-zg2rx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-965156 -v=7 --alsologtostderr
E0127 12:29:39.547745  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:39.554165  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:39.565524  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:39.586895  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:39.628294  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:39.709827  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:39.871389  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:40.193355  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:40.834697  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:42.116735  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:44.678242  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:29:49.800469  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:30:00.042184  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-965156 -v=7 --alsologtostderr: (56.994166428s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.86s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-965156 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp testdata/cp-test.txt ha-965156:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile802280223/001/cp-test_ha-965156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156:/home/docker/cp-test.txt ha-965156-m02:/home/docker/cp-test_ha-965156_ha-965156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m02 "sudo cat /home/docker/cp-test_ha-965156_ha-965156-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156:/home/docker/cp-test.txt ha-965156-m03:/home/docker/cp-test_ha-965156_ha-965156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m03 "sudo cat /home/docker/cp-test_ha-965156_ha-965156-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156:/home/docker/cp-test.txt ha-965156-m04:/home/docker/cp-test_ha-965156_ha-965156-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m04 "sudo cat /home/docker/cp-test_ha-965156_ha-965156-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp testdata/cp-test.txt ha-965156-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile802280223/001/cp-test_ha-965156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m02:/home/docker/cp-test.txt ha-965156:/home/docker/cp-test_ha-965156-m02_ha-965156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156 "sudo cat /home/docker/cp-test_ha-965156-m02_ha-965156.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m02:/home/docker/cp-test.txt ha-965156-m03:/home/docker/cp-test_ha-965156-m02_ha-965156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m03 "sudo cat /home/docker/cp-test_ha-965156-m02_ha-965156-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m02:/home/docker/cp-test.txt ha-965156-m04:/home/docker/cp-test_ha-965156-m02_ha-965156-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m04 "sudo cat /home/docker/cp-test_ha-965156-m02_ha-965156-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp testdata/cp-test.txt ha-965156-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile802280223/001/cp-test_ha-965156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m03:/home/docker/cp-test.txt ha-965156:/home/docker/cp-test_ha-965156-m03_ha-965156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156 "sudo cat /home/docker/cp-test_ha-965156-m03_ha-965156.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m03:/home/docker/cp-test.txt ha-965156-m02:/home/docker/cp-test_ha-965156-m03_ha-965156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m02 "sudo cat /home/docker/cp-test_ha-965156-m03_ha-965156-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m03:/home/docker/cp-test.txt ha-965156-m04:/home/docker/cp-test_ha-965156-m03_ha-965156-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m04 "sudo cat /home/docker/cp-test_ha-965156-m03_ha-965156-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp testdata/cp-test.txt ha-965156-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile802280223/001/cp-test_ha-965156-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m04:/home/docker/cp-test.txt ha-965156:/home/docker/cp-test_ha-965156-m04_ha-965156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156 "sudo cat /home/docker/cp-test_ha-965156-m04_ha-965156.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m04:/home/docker/cp-test.txt ha-965156-m02:/home/docker/cp-test_ha-965156-m04_ha-965156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m02 "sudo cat /home/docker/cp-test_ha-965156-m04_ha-965156-m02.txt"
E0127 12:30:20.523788  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 cp ha-965156-m04:/home/docker/cp-test.txt ha-965156-m03:/home/docker/cp-test_ha-965156-m04_ha-965156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 ssh -n ha-965156-m03 "sudo cat /home/docker/cp-test_ha-965156-m04_ha-965156-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.13s)
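
CopyFile exercises `minikube cp` followed by `minikube ssh ... sudo cat` for every node pair. The sketch below does one such round trip and compares the bytes; the profile and node names are from this run, and a `minikube` binary on PATH is assumed (the test uses out/minikube-linux-amd64).

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const profile = "ha-965156"
	const node = profile + "-m02"
	const remote = "/home/docker/cp-test.txt"

	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatalf("reading local file: %v", err)
	}

	// Copy the file onto the node, then read it back over ssh.
	if out, err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":"+remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+remote).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}

	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(local)) {
		log.Fatalf("content mismatch: got %q", strings.TrimSpace(string(got)))
	}
	log.Printf("round trip OK: %d bytes", len(local))
}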

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 node stop m02 -v=7 --alsologtostderr
E0127 12:31:01.485970  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:08.030729  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:31:35.733932  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-965156 node stop m02 -v=7 --alsologtostderr: (1m30.991669171s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr: exit status 7 (648.627603ms)

                                                
                                                
-- stdout --
	ha-965156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-965156-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-965156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-965156-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:31:52.318057  383443 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:31:52.318187  383443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:52.318199  383443 out.go:358] Setting ErrFile to fd 2...
	I0127 12:31:52.318206  383443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:31:52.318399  383443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 12:31:52.318595  383443 out.go:352] Setting JSON to false
	I0127 12:31:52.318628  383443 mustload.go:65] Loading cluster: ha-965156
	I0127 12:31:52.318745  383443 notify.go:220] Checking for updates...
	I0127 12:31:52.319040  383443 config.go:182] Loaded profile config "ha-965156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:31:52.319063  383443 status.go:174] checking status of ha-965156 ...
	I0127 12:31:52.319486  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.319528  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.335163  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0127 12:31:52.335561  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.336332  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.336358  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.336739  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.336999  383443 main.go:141] libmachine: (ha-965156) Calling .GetState
	I0127 12:31:52.338409  383443 status.go:371] ha-965156 host status = "Running" (err=<nil>)
	I0127 12:31:52.338425  383443 host.go:66] Checking if "ha-965156" exists ...
	I0127 12:31:52.338760  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.338819  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.353358  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0127 12:31:52.353800  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.354317  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.354349  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.354701  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.354874  383443 main.go:141] libmachine: (ha-965156) Calling .GetIP
	I0127 12:31:52.357474  383443 main.go:141] libmachine: (ha-965156) DBG | domain ha-965156 has defined MAC address 52:54:00:64:b1:a6 in network mk-ha-965156
	I0127 12:31:52.357886  383443 main.go:141] libmachine: (ha-965156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b1:a6", ip: ""} in network mk-ha-965156: {Iface:virbr1 ExpiryTime:2025-01-27 13:25:47 +0000 UTC Type:0 Mac:52:54:00:64:b1:a6 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-965156 Clientid:01:52:54:00:64:b1:a6}
	I0127 12:31:52.357910  383443 main.go:141] libmachine: (ha-965156) DBG | domain ha-965156 has defined IP address 192.168.39.173 and MAC address 52:54:00:64:b1:a6 in network mk-ha-965156
	I0127 12:31:52.357991  383443 host.go:66] Checking if "ha-965156" exists ...
	I0127 12:31:52.358274  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.358312  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.372861  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0127 12:31:52.373259  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.373758  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.373783  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.374069  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.374245  383443 main.go:141] libmachine: (ha-965156) Calling .DriverName
	I0127 12:31:52.374442  383443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:31:52.374487  383443 main.go:141] libmachine: (ha-965156) Calling .GetSSHHostname
	I0127 12:31:52.377256  383443 main.go:141] libmachine: (ha-965156) DBG | domain ha-965156 has defined MAC address 52:54:00:64:b1:a6 in network mk-ha-965156
	I0127 12:31:52.377756  383443 main.go:141] libmachine: (ha-965156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:b1:a6", ip: ""} in network mk-ha-965156: {Iface:virbr1 ExpiryTime:2025-01-27 13:25:47 +0000 UTC Type:0 Mac:52:54:00:64:b1:a6 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-965156 Clientid:01:52:54:00:64:b1:a6}
	I0127 12:31:52.377783  383443 main.go:141] libmachine: (ha-965156) DBG | domain ha-965156 has defined IP address 192.168.39.173 and MAC address 52:54:00:64:b1:a6 in network mk-ha-965156
	I0127 12:31:52.377951  383443 main.go:141] libmachine: (ha-965156) Calling .GetSSHPort
	I0127 12:31:52.378142  383443 main.go:141] libmachine: (ha-965156) Calling .GetSSHKeyPath
	I0127 12:31:52.378311  383443 main.go:141] libmachine: (ha-965156) Calling .GetSSHUsername
	I0127 12:31:52.378433  383443 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/ha-965156/id_rsa Username:docker}
	I0127 12:31:52.468682  383443 ssh_runner.go:195] Run: systemctl --version
	I0127 12:31:52.476040  383443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:31:52.492853  383443 kubeconfig.go:125] found "ha-965156" server: "https://192.168.39.254:8443"
	I0127 12:31:52.492885  383443 api_server.go:166] Checking apiserver status ...
	I0127 12:31:52.492915  383443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:52.509953  383443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0127 12:31:52.521874  383443 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:31:52.521927  383443 ssh_runner.go:195] Run: ls
	I0127 12:31:52.526576  383443 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 12:31:52.531806  383443 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 12:31:52.531826  383443 status.go:463] ha-965156 apiserver status = Running (err=<nil>)
	I0127 12:31:52.531835  383443 status.go:176] ha-965156 status: &{Name:ha-965156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:31:52.531853  383443 status.go:174] checking status of ha-965156-m02 ...
	I0127 12:31:52.532153  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.532191  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.547677  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41995
	I0127 12:31:52.548042  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.548577  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.548597  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.548891  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.549115  383443 main.go:141] libmachine: (ha-965156-m02) Calling .GetState
	I0127 12:31:52.550759  383443 status.go:371] ha-965156-m02 host status = "Stopped" (err=<nil>)
	I0127 12:31:52.550772  383443 status.go:384] host is not running, skipping remaining checks
	I0127 12:31:52.550777  383443 status.go:176] ha-965156-m02 status: &{Name:ha-965156-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:31:52.550794  383443 status.go:174] checking status of ha-965156-m03 ...
	I0127 12:31:52.551068  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.551107  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.565962  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0127 12:31:52.566343  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.566838  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.566853  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.567174  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.567376  383443 main.go:141] libmachine: (ha-965156-m03) Calling .GetState
	I0127 12:31:52.568785  383443 status.go:371] ha-965156-m03 host status = "Running" (err=<nil>)
	I0127 12:31:52.568799  383443 host.go:66] Checking if "ha-965156-m03" exists ...
	I0127 12:31:52.569068  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.569106  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.583948  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37727
	I0127 12:31:52.584453  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.584948  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.584975  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.585351  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.585555  383443 main.go:141] libmachine: (ha-965156-m03) Calling .GetIP
	I0127 12:31:52.588270  383443 main.go:141] libmachine: (ha-965156-m03) DBG | domain ha-965156-m03 has defined MAC address 52:54:00:10:e9:7b in network mk-ha-965156
	I0127 12:31:52.588697  383443 main.go:141] libmachine: (ha-965156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:e9:7b", ip: ""} in network mk-ha-965156: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:50 +0000 UTC Type:0 Mac:52:54:00:10:e9:7b Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-965156-m03 Clientid:01:52:54:00:10:e9:7b}
	I0127 12:31:52.588729  383443 main.go:141] libmachine: (ha-965156-m03) DBG | domain ha-965156-m03 has defined IP address 192.168.39.18 and MAC address 52:54:00:10:e9:7b in network mk-ha-965156
	I0127 12:31:52.588847  383443 host.go:66] Checking if "ha-965156-m03" exists ...
	I0127 12:31:52.589275  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.589329  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.604843  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0127 12:31:52.605234  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.605677  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.605701  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.606035  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.606231  383443 main.go:141] libmachine: (ha-965156-m03) Calling .DriverName
	I0127 12:31:52.606428  383443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:31:52.606448  383443 main.go:141] libmachine: (ha-965156-m03) Calling .GetSSHHostname
	I0127 12:31:52.609171  383443 main.go:141] libmachine: (ha-965156-m03) DBG | domain ha-965156-m03 has defined MAC address 52:54:00:10:e9:7b in network mk-ha-965156
	I0127 12:31:52.609618  383443 main.go:141] libmachine: (ha-965156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:e9:7b", ip: ""} in network mk-ha-965156: {Iface:virbr1 ExpiryTime:2025-01-27 13:27:50 +0000 UTC Type:0 Mac:52:54:00:10:e9:7b Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-965156-m03 Clientid:01:52:54:00:10:e9:7b}
	I0127 12:31:52.609640  383443 main.go:141] libmachine: (ha-965156-m03) DBG | domain ha-965156-m03 has defined IP address 192.168.39.18 and MAC address 52:54:00:10:e9:7b in network mk-ha-965156
	I0127 12:31:52.609782  383443 main.go:141] libmachine: (ha-965156-m03) Calling .GetSSHPort
	I0127 12:31:52.609957  383443 main.go:141] libmachine: (ha-965156-m03) Calling .GetSSHKeyPath
	I0127 12:31:52.610074  383443 main.go:141] libmachine: (ha-965156-m03) Calling .GetSSHUsername
	I0127 12:31:52.610227  383443 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/ha-965156-m03/id_rsa Username:docker}
	I0127 12:31:52.695357  383443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:31:52.712974  383443 kubeconfig.go:125] found "ha-965156" server: "https://192.168.39.254:8443"
	I0127 12:31:52.713012  383443 api_server.go:166] Checking apiserver status ...
	I0127 12:31:52.713058  383443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:31:52.727924  383443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1501/cgroup
	W0127 12:31:52.737233  383443 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1501/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:31:52.737283  383443 ssh_runner.go:195] Run: ls
	I0127 12:31:52.743701  383443 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 12:31:52.749100  383443 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 12:31:52.749130  383443 status.go:463] ha-965156-m03 apiserver status = Running (err=<nil>)
	I0127 12:31:52.749142  383443 status.go:176] ha-965156-m03 status: &{Name:ha-965156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:31:52.749166  383443 status.go:174] checking status of ha-965156-m04 ...
	I0127 12:31:52.749490  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.749535  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.765467  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0127 12:31:52.765958  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.766436  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.766457  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.766807  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.766982  383443 main.go:141] libmachine: (ha-965156-m04) Calling .GetState
	I0127 12:31:52.768353  383443 status.go:371] ha-965156-m04 host status = "Running" (err=<nil>)
	I0127 12:31:52.768369  383443 host.go:66] Checking if "ha-965156-m04" exists ...
	I0127 12:31:52.768647  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.768690  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.783953  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I0127 12:31:52.784414  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.784879  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.784903  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.785207  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.785378  383443 main.go:141] libmachine: (ha-965156-m04) Calling .GetIP
	I0127 12:31:52.788166  383443 main.go:141] libmachine: (ha-965156-m04) DBG | domain ha-965156-m04 has defined MAC address 52:54:00:e8:af:8c in network mk-ha-965156
	I0127 12:31:52.788581  383443 main.go:141] libmachine: (ha-965156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:af:8c", ip: ""} in network mk-ha-965156: {Iface:virbr1 ExpiryTime:2025-01-27 13:29:25 +0000 UTC Type:0 Mac:52:54:00:e8:af:8c Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-965156-m04 Clientid:01:52:54:00:e8:af:8c}
	I0127 12:31:52.788607  383443 main.go:141] libmachine: (ha-965156-m04) DBG | domain ha-965156-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:e8:af:8c in network mk-ha-965156
	I0127 12:31:52.788747  383443 host.go:66] Checking if "ha-965156-m04" exists ...
	I0127 12:31:52.789047  383443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:31:52.789097  383443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:31:52.803973  383443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I0127 12:31:52.804404  383443 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:31:52.804845  383443 main.go:141] libmachine: Using API Version  1
	I0127 12:31:52.804877  383443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:31:52.805164  383443 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:31:52.805361  383443 main.go:141] libmachine: (ha-965156-m04) Calling .DriverName
	I0127 12:31:52.805529  383443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:31:52.805550  383443 main.go:141] libmachine: (ha-965156-m04) Calling .GetSSHHostname
	I0127 12:31:52.808294  383443 main.go:141] libmachine: (ha-965156-m04) DBG | domain ha-965156-m04 has defined MAC address 52:54:00:e8:af:8c in network mk-ha-965156
	I0127 12:31:52.808691  383443 main.go:141] libmachine: (ha-965156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:af:8c", ip: ""} in network mk-ha-965156: {Iface:virbr1 ExpiryTime:2025-01-27 13:29:25 +0000 UTC Type:0 Mac:52:54:00:e8:af:8c Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-965156-m04 Clientid:01:52:54:00:e8:af:8c}
	I0127 12:31:52.808723  383443 main.go:141] libmachine: (ha-965156-m04) DBG | domain ha-965156-m04 has defined IP address 192.168.39.241 and MAC address 52:54:00:e8:af:8c in network mk-ha-965156
	I0127 12:31:52.808844  383443 main.go:141] libmachine: (ha-965156-m04) Calling .GetSSHPort
	I0127 12:31:52.809013  383443 main.go:141] libmachine: (ha-965156-m04) Calling .GetSSHKeyPath
	I0127 12:31:52.809163  383443 main.go:141] libmachine: (ha-965156-m04) Calling .GetSSHUsername
	I0127 12:31:52.809286  383443 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/ha-965156-m04/id_rsa Username:docker}
	I0127 12:31:52.895413  383443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:31:52.917422  383443 status.go:176] ha-965156-m04 status: &{Name:ha-965156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.64s)
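
Note that `minikube status` above exits with status 7 once m02 is stopped, while still printing per-node detail on stdout, so callers have to separate "command failed" from "cluster is degraded". A sketch of handling that split, assuming the profile name from this run and a `minikube` binary on PATH.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-965156", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Println("all nodes report Running")
	case errors.As(err, &exitErr):
		// Non-zero exit: the status text above says which hosts are Stopped.
		log.Printf("cluster degraded, status exited with code %d", exitErr.ExitCode())
	default:
		log.Fatalf("could not run minikube status: %v", err)
	}
}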

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 node start m02 -v=7 --alsologtostderr
E0127 12:32:23.407895  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-965156 node start m02 -v=7 --alsologtostderr: (47.582659671s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (496.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-965156 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-965156 -v=7 --alsologtostderr
E0127 12:34:39.548069  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:35:07.249971  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:36:08.031403  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-965156 -v=7 --alsologtostderr: (4m34.171889978s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-965156 --wait=true -v=7 --alsologtostderr
E0127 12:39:39.547879  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-965156 --wait=true -v=7 --alsologtostderr: (3m41.860786392s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-965156
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (496.14s)
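
The essence of RestartClusterKeepsNodes is that the node set recorded before the stop matches the one after the restart. Below is a hedged sketch of that comparison built on `minikube node list`; the profile name is from this run, and the assumption is that the node name is the first whitespace-separated field of each output line.

package main

import (
	"log"
	"os/exec"
	"reflect"
	"sort"
	"strings"
)

// nodeNames returns the sorted node names reported by `minikube node list`.
func nodeNames(profile string) []string {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		log.Fatalf("node list failed: %v", err)
	}
	var names []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if fields := strings.Fields(line); len(fields) > 0 {
			names = append(names, fields[0])
		}
	}
	sort.Strings(names)
	return names
}

func main() {
	const profile = "ha-965156"
	before := nodeNames(profile)

	if out, err := exec.Command("minikube", "stop", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("stop failed: %v\n%s", err, out)
	}
	if out, err := exec.Command("minikube", "start", "-p", profile, "--wait=true").CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	after := nodeNames(profile)
	if !reflect.DeepEqual(before, after) {
		log.Fatalf("node set changed across restart: before=%v after=%v", before, after)
	}
	log.Printf("restart kept all %d nodes: %v", len(after), after)
}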

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 node delete m03 -v=7 --alsologtostderr
E0127 12:41:08.035036  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-965156 node delete m03 -v=7 --alsologtostderr: (17.504753792s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 stop -v=7 --alsologtostderr
E0127 12:42:31.095792  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:44:39.548049  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-965156 stop -v=7 --alsologtostderr: (4m32.581051344s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr: exit status 7 (111.458382ms)

                                                
                                                
-- stdout --
	ha-965156
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-965156-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-965156-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:45:50.632444  388301 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:45:50.632701  388301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:45:50.632710  388301 out.go:358] Setting ErrFile to fd 2...
	I0127 12:45:50.632714  388301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:45:50.632861  388301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 12:45:50.633025  388301 out.go:352] Setting JSON to false
	I0127 12:45:50.633057  388301 mustload.go:65] Loading cluster: ha-965156
	I0127 12:45:50.633122  388301 notify.go:220] Checking for updates...
	I0127 12:45:50.633493  388301 config.go:182] Loaded profile config "ha-965156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:45:50.633523  388301 status.go:174] checking status of ha-965156 ...
	I0127 12:45:50.634044  388301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:45:50.634095  388301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:45:50.657930  388301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35719
	I0127 12:45:50.658355  388301 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:45:50.658893  388301 main.go:141] libmachine: Using API Version  1
	I0127 12:45:50.658908  388301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:45:50.659291  388301 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:45:50.659514  388301 main.go:141] libmachine: (ha-965156) Calling .GetState
	I0127 12:45:50.661109  388301 status.go:371] ha-965156 host status = "Stopped" (err=<nil>)
	I0127 12:45:50.661123  388301 status.go:384] host is not running, skipping remaining checks
	I0127 12:45:50.661129  388301 status.go:176] ha-965156 status: &{Name:ha-965156 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:45:50.661151  388301 status.go:174] checking status of ha-965156-m02 ...
	I0127 12:45:50.661425  388301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:45:50.661464  388301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:45:50.675691  388301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I0127 12:45:50.676060  388301 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:45:50.676536  388301 main.go:141] libmachine: Using API Version  1
	I0127 12:45:50.676555  388301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:45:50.676846  388301 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:45:50.677071  388301 main.go:141] libmachine: (ha-965156-m02) Calling .GetState
	I0127 12:45:50.678349  388301 status.go:371] ha-965156-m02 host status = "Stopped" (err=<nil>)
	I0127 12:45:50.678365  388301 status.go:384] host is not running, skipping remaining checks
	I0127 12:45:50.678372  388301 status.go:176] ha-965156-m02 status: &{Name:ha-965156-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:45:50.678393  388301 status.go:174] checking status of ha-965156-m04 ...
	I0127 12:45:50.678713  388301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:45:50.678756  388301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:45:50.692898  388301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
	I0127 12:45:50.693248  388301 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:45:50.693720  388301 main.go:141] libmachine: Using API Version  1
	I0127 12:45:50.693744  388301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:45:50.694020  388301 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:45:50.694219  388301 main.go:141] libmachine: (ha-965156-m04) Calling .GetState
	I0127 12:45:50.695646  388301 status.go:371] ha-965156-m04 host status = "Stopped" (err=<nil>)
	I0127 12:45:50.695660  388301 status.go:384] host is not running, skipping remaining checks
	I0127 12:45:50.695665  388301 status.go:176] ha-965156-m04 status: &{Name:ha-965156-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (116.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-965156 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 12:46:02.611521  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:46:08.032401  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-965156 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.72928297s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (116.49s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (87.52s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-965156 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-965156 --control-plane -v=7 --alsologtostderr: (1m26.665051098s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-965156 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (87.52s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (82.24s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-229871 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0127 12:49:39.548224  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-229871 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.24437219s)
--- PASS: TestJSONOutput/start/Command (82.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-229871 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-229871 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-229871 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-229871 --output=json --user=testUser: (7.382291017s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-147302 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-147302 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.386899ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"17b90cd1-2645-4d00-83a8-7e8e6d1701fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-147302] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc91465f-f35a-47d5-8d18-4037cccb18d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20317"}}
	{"specversion":"1.0","id":"8b8b31b4-04e3-46a4-bd20-d8776dfaa2b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f1361099-fea6-4448-a25f-4c3840e36841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig"}}
	{"specversion":"1.0","id":"287e6279-df3f-4125-9c4a-f5f4a3d4dd3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube"}}
	{"specversion":"1.0","id":"aba9cb6c-9925-455f-b87c-a33a10b81e35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d00dd09f-9a6b-4a62-a588-1d3f974ac4a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"465a66ae-ac61-4d72-be59-c171d7c4f44e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-147302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-147302
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (90.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-672324 --driver=kvm2  --container-runtime=crio
E0127 12:51:08.030969  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-672324 --driver=kvm2  --container-runtime=crio: (45.755001138s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-692719 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-692719 --driver=kvm2  --container-runtime=crio: (42.00764016s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-672324
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-692719
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-692719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-692719
helpers_test.go:175: Cleaning up "first-672324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-672324
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-672324: (1.009106815s)
--- PASS: TestMinikubeProfile (90.82s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.16s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-853699 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-853699 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.155043434s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.16s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-853699 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-853699 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.59s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-872270 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-872270 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.584761431s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.59s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872270 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872270 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-853699 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872270 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872270 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.63s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-872270
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-872270: (1.628043168s)
--- PASS: TestMountStart/serial/Stop (1.63s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.73s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-872270
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-872270: (23.729157775s)
--- PASS: TestMountStart/serial/RestartStopped (24.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872270 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-872270 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (112.67s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-923127 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0127 12:54:39.547317  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-923127 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.25799373s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.67s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (8.72s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-923127 -- rollout status deployment/busybox: (7.197072907s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-2ttvq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-pg9pw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-2ttvq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-pg9pw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-2ttvq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-pg9pw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.72s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-2ttvq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-2ttvq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-pg9pw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-923127 -- exec busybox-58667487b6-pg9pw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                    
TestMultiNode/serial/AddNode (53.25s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-923127 -v 3 --alsologtostderr
E0127 12:56:08.033689  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-923127 -v 3 --alsologtostderr: (52.681173381s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.25s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-923127 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.33s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp testdata/cp-test.txt multinode-923127:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122714977/001/cp-test_multinode-923127.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127:/home/docker/cp-test.txt multinode-923127-m02:/home/docker/cp-test_multinode-923127_multinode-923127-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m02 "sudo cat /home/docker/cp-test_multinode-923127_multinode-923127-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127:/home/docker/cp-test.txt multinode-923127-m03:/home/docker/cp-test_multinode-923127_multinode-923127-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m03 "sudo cat /home/docker/cp-test_multinode-923127_multinode-923127-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp testdata/cp-test.txt multinode-923127-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122714977/001/cp-test_multinode-923127-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127-m02:/home/docker/cp-test.txt multinode-923127:/home/docker/cp-test_multinode-923127-m02_multinode-923127.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127 "sudo cat /home/docker/cp-test_multinode-923127-m02_multinode-923127.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127-m02:/home/docker/cp-test.txt multinode-923127-m03:/home/docker/cp-test_multinode-923127-m02_multinode-923127-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m03 "sudo cat /home/docker/cp-test_multinode-923127-m02_multinode-923127-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp testdata/cp-test.txt multinode-923127-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122714977/001/cp-test_multinode-923127-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127-m03:/home/docker/cp-test.txt multinode-923127:/home/docker/cp-test_multinode-923127-m03_multinode-923127.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127 "sudo cat /home/docker/cp-test_multinode-923127-m03_multinode-923127.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 cp multinode-923127-m03:/home/docker/cp-test.txt multinode-923127-m02:/home/docker/cp-test_multinode-923127-m03_multinode-923127-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 ssh -n multinode-923127-m02 "sudo cat /home/docker/cp-test_multinode-923127-m03_multinode-923127-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.33s)

                                                
                                    
TestMultiNode/serial/StopNode (2.4s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-923127 node stop m03: (1.553328417s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-923127 status: exit status 7 (419.339743ms)

                                                
                                                
-- stdout --
	multinode-923127
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-923127-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-923127-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-923127 status --alsologtostderr: exit status 7 (423.651102ms)

                                                
                                                
-- stdout --
	multinode-923127
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-923127-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-923127-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:56:54.731313  396213 out.go:345] Setting OutFile to fd 1 ...
	I0127 12:56:54.731427  396213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:56:54.731436  396213 out.go:358] Setting ErrFile to fd 2...
	I0127 12:56:54.731441  396213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:56:54.731622  396213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 12:56:54.731818  396213 out.go:352] Setting JSON to false
	I0127 12:56:54.731857  396213 mustload.go:65] Loading cluster: multinode-923127
	I0127 12:56:54.731958  396213 notify.go:220] Checking for updates...
	I0127 12:56:54.732391  396213 config.go:182] Loaded profile config "multinode-923127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 12:56:54.732417  396213 status.go:174] checking status of multinode-923127 ...
	I0127 12:56:54.733021  396213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:56:54.733070  396213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:56:54.749348  396213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I0127 12:56:54.749774  396213 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:56:54.750365  396213 main.go:141] libmachine: Using API Version  1
	I0127 12:56:54.750394  396213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:56:54.750725  396213 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:56:54.750936  396213 main.go:141] libmachine: (multinode-923127) Calling .GetState
	I0127 12:56:54.752546  396213 status.go:371] multinode-923127 host status = "Running" (err=<nil>)
	I0127 12:56:54.752570  396213 host.go:66] Checking if "multinode-923127" exists ...
	I0127 12:56:54.752838  396213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:56:54.752875  396213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:56:54.767754  396213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I0127 12:56:54.768191  396213 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:56:54.768638  396213 main.go:141] libmachine: Using API Version  1
	I0127 12:56:54.768660  396213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:56:54.768926  396213 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:56:54.769146  396213 main.go:141] libmachine: (multinode-923127) Calling .GetIP
	I0127 12:56:54.771558  396213 main.go:141] libmachine: (multinode-923127) DBG | domain multinode-923127 has defined MAC address 52:54:00:fd:ed:c6 in network mk-multinode-923127
	I0127 12:56:54.771920  396213 main.go:141] libmachine: (multinode-923127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:ed:c6", ip: ""} in network mk-multinode-923127: {Iface:virbr1 ExpiryTime:2025-01-27 13:54:04 +0000 UTC Type:0 Mac:52:54:00:fd:ed:c6 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-923127 Clientid:01:52:54:00:fd:ed:c6}
	I0127 12:56:54.771947  396213 main.go:141] libmachine: (multinode-923127) DBG | domain multinode-923127 has defined IP address 192.168.39.94 and MAC address 52:54:00:fd:ed:c6 in network mk-multinode-923127
	I0127 12:56:54.772053  396213 host.go:66] Checking if "multinode-923127" exists ...
	I0127 12:56:54.772426  396213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:56:54.772473  396213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:56:54.787459  396213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0127 12:56:54.787818  396213 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:56:54.788219  396213 main.go:141] libmachine: Using API Version  1
	I0127 12:56:54.788239  396213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:56:54.788541  396213 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:56:54.788762  396213 main.go:141] libmachine: (multinode-923127) Calling .DriverName
	I0127 12:56:54.788948  396213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:56:54.788971  396213 main.go:141] libmachine: (multinode-923127) Calling .GetSSHHostname
	I0127 12:56:54.791394  396213 main.go:141] libmachine: (multinode-923127) DBG | domain multinode-923127 has defined MAC address 52:54:00:fd:ed:c6 in network mk-multinode-923127
	I0127 12:56:54.791813  396213 main.go:141] libmachine: (multinode-923127) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:ed:c6", ip: ""} in network mk-multinode-923127: {Iface:virbr1 ExpiryTime:2025-01-27 13:54:04 +0000 UTC Type:0 Mac:52:54:00:fd:ed:c6 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-923127 Clientid:01:52:54:00:fd:ed:c6}
	I0127 12:56:54.791838  396213 main.go:141] libmachine: (multinode-923127) DBG | domain multinode-923127 has defined IP address 192.168.39.94 and MAC address 52:54:00:fd:ed:c6 in network mk-multinode-923127
	I0127 12:56:54.791994  396213 main.go:141] libmachine: (multinode-923127) Calling .GetSSHPort
	I0127 12:56:54.792170  396213 main.go:141] libmachine: (multinode-923127) Calling .GetSSHKeyPath
	I0127 12:56:54.792337  396213 main.go:141] libmachine: (multinode-923127) Calling .GetSSHUsername
	I0127 12:56:54.792500  396213 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/multinode-923127/id_rsa Username:docker}
	I0127 12:56:54.873704  396213 ssh_runner.go:195] Run: systemctl --version
	I0127 12:56:54.880934  396213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:56:54.895253  396213 kubeconfig.go:125] found "multinode-923127" server: "https://192.168.39.94:8443"
	I0127 12:56:54.895287  396213 api_server.go:166] Checking apiserver status ...
	I0127 12:56:54.895342  396213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:56:54.909773  396213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1038/cgroup
	W0127 12:56:54.919534  396213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1038/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:56:54.919587  396213 ssh_runner.go:195] Run: ls
	I0127 12:56:54.923667  396213 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0127 12:56:54.929717  396213 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0127 12:56:54.929736  396213 status.go:463] multinode-923127 apiserver status = Running (err=<nil>)
	I0127 12:56:54.929746  396213 status.go:176] multinode-923127 status: &{Name:multinode-923127 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:56:54.929764  396213 status.go:174] checking status of multinode-923127-m02 ...
	I0127 12:56:54.930097  396213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:56:54.930134  396213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:56:54.947348  396213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
	I0127 12:56:54.947795  396213 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:56:54.948233  396213 main.go:141] libmachine: Using API Version  1
	I0127 12:56:54.948250  396213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:56:54.948666  396213 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:56:54.948881  396213 main.go:141] libmachine: (multinode-923127-m02) Calling .GetState
	I0127 12:56:54.950499  396213 status.go:371] multinode-923127-m02 host status = "Running" (err=<nil>)
	I0127 12:56:54.950517  396213 host.go:66] Checking if "multinode-923127-m02" exists ...
	I0127 12:56:54.950958  396213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:56:54.951010  396213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:56:54.965961  396213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39755
	I0127 12:56:54.966393  396213 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:56:54.966867  396213 main.go:141] libmachine: Using API Version  1
	I0127 12:56:54.966888  396213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:56:54.967157  396213 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:56:54.967325  396213 main.go:141] libmachine: (multinode-923127-m02) Calling .GetIP
	I0127 12:56:54.969742  396213 main.go:141] libmachine: (multinode-923127-m02) DBG | domain multinode-923127-m02 has defined MAC address 52:54:00:82:0f:e0 in network mk-multinode-923127
	I0127 12:56:54.970106  396213 main.go:141] libmachine: (multinode-923127-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0f:e0", ip: ""} in network mk-multinode-923127: {Iface:virbr1 ExpiryTime:2025-01-27 13:55:05 +0000 UTC Type:0 Mac:52:54:00:82:0f:e0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-923127-m02 Clientid:01:52:54:00:82:0f:e0}
	I0127 12:56:54.970128  396213 main.go:141] libmachine: (multinode-923127-m02) DBG | domain multinode-923127-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:82:0f:e0 in network mk-multinode-923127
	I0127 12:56:54.970287  396213 host.go:66] Checking if "multinode-923127-m02" exists ...
	I0127 12:56:54.970584  396213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:56:54.970639  396213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:56:54.985044  396213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45957
	I0127 12:56:54.985360  396213 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:56:54.985777  396213 main.go:141] libmachine: Using API Version  1
	I0127 12:56:54.985794  396213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:56:54.986041  396213 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:56:54.986186  396213 main.go:141] libmachine: (multinode-923127-m02) Calling .DriverName
	I0127 12:56:54.986367  396213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:56:54.986389  396213 main.go:141] libmachine: (multinode-923127-m02) Calling .GetSSHHostname
	I0127 12:56:54.988922  396213 main.go:141] libmachine: (multinode-923127-m02) DBG | domain multinode-923127-m02 has defined MAC address 52:54:00:82:0f:e0 in network mk-multinode-923127
	I0127 12:56:54.989334  396213 main.go:141] libmachine: (multinode-923127-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:0f:e0", ip: ""} in network mk-multinode-923127: {Iface:virbr1 ExpiryTime:2025-01-27 13:55:05 +0000 UTC Type:0 Mac:52:54:00:82:0f:e0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-923127-m02 Clientid:01:52:54:00:82:0f:e0}
	I0127 12:56:54.989372  396213 main.go:141] libmachine: (multinode-923127-m02) DBG | domain multinode-923127-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:82:0f:e0 in network mk-multinode-923127
	I0127 12:56:54.989510  396213 main.go:141] libmachine: (multinode-923127-m02) Calling .GetSSHPort
	I0127 12:56:54.989721  396213 main.go:141] libmachine: (multinode-923127-m02) Calling .GetSSHKeyPath
	I0127 12:56:54.989870  396213 main.go:141] libmachine: (multinode-923127-m02) Calling .GetSSHUsername
	I0127 12:56:54.990000  396213 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20317-361578/.minikube/machines/multinode-923127-m02/id_rsa Username:docker}
	I0127 12:56:55.073126  396213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:56:55.086954  396213 status.go:176] multinode-923127-m02 status: &{Name:multinode-923127-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:56:55.086989  396213 status.go:174] checking status of multinode-923127-m03 ...
	I0127 12:56:55.087409  396213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 12:56:55.087467  396213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 12:56:55.103168  396213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
	I0127 12:56:55.103647  396213 main.go:141] libmachine: () Calling .GetVersion
	I0127 12:56:55.104158  396213 main.go:141] libmachine: Using API Version  1
	I0127 12:56:55.104184  396213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 12:56:55.104545  396213 main.go:141] libmachine: () Calling .GetMachineName
	I0127 12:56:55.104734  396213 main.go:141] libmachine: (multinode-923127-m03) Calling .GetState
	I0127 12:56:55.106294  396213 status.go:371] multinode-923127-m03 host status = "Stopped" (err=<nil>)
	I0127 12:56:55.106311  396213 status.go:384] host is not running, skipping remaining checks
	I0127 12:56:55.106318  396213 status.go:176] multinode-923127-m03 status: &{Name:multinode-923127-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.14s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-923127 node start m03 -v=7 --alsologtostderr: (40.50910538s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.14s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (327.89s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-923127
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-923127
E0127 12:59:11.097876  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 12:59:39.548367  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-923127: (3m3.173418518s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-923127 --wait=true -v=8 --alsologtostderr
E0127 13:01:08.031149  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:02:42.613523  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-923127 --wait=true -v=8 --alsologtostderr: (2m24.617000829s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-923127
--- PASS: TestMultiNode/serial/RestartKeepsNodes (327.89s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.76s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-923127 node delete m03: (2.233871642s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.76s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.85s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 stop
E0127 13:04:39.548160  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:06:08.038792  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-923127 stop: (3m1.683778513s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-923127 status: exit status 7 (86.111215ms)

                                                
                                                
-- stdout --
	multinode-923127
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-923127-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-923127 status --alsologtostderr: exit status 7 (84.269024ms)

                                                
                                                
-- stdout --
	multinode-923127
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-923127-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:06:08.715401  399155 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:06:08.715522  399155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:06:08.715531  399155 out.go:358] Setting ErrFile to fd 2...
	I0127 13:06:08.715535  399155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:06:08.715750  399155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:06:08.715912  399155 out.go:352] Setting JSON to false
	I0127 13:06:08.715939  399155 mustload.go:65] Loading cluster: multinode-923127
	I0127 13:06:08.715984  399155 notify.go:220] Checking for updates...
	I0127 13:06:08.716470  399155 config.go:182] Loaded profile config "multinode-923127": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0127 13:06:08.716499  399155 status.go:174] checking status of multinode-923127 ...
	I0127 13:06:08.717024  399155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:06:08.717074  399155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:06:08.732043  399155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0127 13:06:08.732422  399155 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:06:08.732951  399155 main.go:141] libmachine: Using API Version  1
	I0127 13:06:08.732976  399155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:06:08.733359  399155 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:06:08.733610  399155 main.go:141] libmachine: (multinode-923127) Calling .GetState
	I0127 13:06:08.735188  399155 status.go:371] multinode-923127 host status = "Stopped" (err=<nil>)
	I0127 13:06:08.735205  399155 status.go:384] host is not running, skipping remaining checks
	I0127 13:06:08.735212  399155 status.go:176] multinode-923127 status: &{Name:multinode-923127 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 13:06:08.735250  399155 status.go:174] checking status of multinode-923127-m02 ...
	I0127 13:06:08.735531  399155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0127 13:06:08.735562  399155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 13:06:08.750344  399155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46669
	I0127 13:06:08.750675  399155 main.go:141] libmachine: () Calling .GetVersion
	I0127 13:06:08.751111  399155 main.go:141] libmachine: Using API Version  1
	I0127 13:06:08.751135  399155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 13:06:08.751410  399155 main.go:141] libmachine: () Calling .GetMachineName
	I0127 13:06:08.751581  399155 main.go:141] libmachine: (multinode-923127-m02) Calling .GetState
	I0127 13:06:08.752897  399155 status.go:371] multinode-923127-m02 host status = "Stopped" (err=<nil>)
	I0127 13:06:08.752913  399155 status.go:384] host is not running, skipping remaining checks
	I0127 13:06:08.752920  399155 status.go:176] multinode-923127-m02 status: &{Name:multinode-923127-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.85s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (105.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-923127 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-923127 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m44.881652165s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-923127 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (105.42s)
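The go-template above asserts that every node's Ready condition reports "True" after the restart. For readers who prefer plain Go over template syntax, a minimal standalone sketch of the same check follows; it is not part of the minikube test suite and assumes kubectl is on PATH and pointed at the restarted cluster.

// nodesready.go: illustrative sketch only, performing the same check as the
// go-template above: print whether each node has Ready=True.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		log.Fatalf("decode nodes: %v", err)
	}
	for _, n := range nl.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", n.Metadata.Name, ready)
	}
}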

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-923127
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-923127-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-923127-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.821313ms)

                                                
                                                
-- stdout --
	* [multinode-923127-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-923127-m02' is duplicated with machine name 'multinode-923127-m02' in profile 'multinode-923127'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-923127-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-923127-m03 --driver=kvm2  --container-runtime=crio: (42.518179712s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-923127
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-923127: exit status 80 (211.103219ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-923127 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-923127-m03 already exists in multinode-923127-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-923127-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.64s)
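The exit-14 failure above comes from minikube rejecting a profile name that matches an existing node machine name, while "multinode-923127-m03" is accepted because no such machine exists yet. A toy Go sketch of that uniqueness rule follows; the lookup table is hypothetical and hard-codes the names from this run, and this is not minikube's actual validation code.

// nameconflict.go: toy illustration of the profile-name uniqueness check.
package main

import "fmt"

// existing profile and its node machine names, as recorded in the log above
var machines = map[string][]string{
	"multinode-923127": {"multinode-923127", "multinode-923127-m02"},
}

// conflicts reports whether a requested profile name collides with an
// existing profile or with one of its node machine names.
func conflicts(name string) bool {
	for profile, nodes := range machines {
		if name == profile {
			return true
		}
		for _, n := range nodes {
			if name == n {
				return true
			}
		}
	}
	return false
}

func main() {
	for _, cand := range []string{"multinode-923127-m02", "multinode-923127-m03"} {
		fmt.Printf("%s conflict=%v\n", cand, conflicts(cand))
	}
}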

                                                
                                    
TestScheduledStopUnix (115.51s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-973528 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-973528 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.836572256s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-973528 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-973528 -n scheduled-stop-973528
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-973528 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 13:12:40.462669  368946 retry.go:31] will retry after 104.479µs: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.463855  368946 retry.go:31] will retry after 129.993µs: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.465006  368946 retry.go:31] will retry after 330.575µs: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.466159  368946 retry.go:31] will retry after 194.316µs: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.467293  368946 retry.go:31] will retry after 514.956µs: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.468427  368946 retry.go:31] will retry after 809.67µs: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.469563  368946 retry.go:31] will retry after 682.924µs: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.470701  368946 retry.go:31] will retry after 2.199925ms: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.473899  368946 retry.go:31] will retry after 1.466522ms: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.476145  368946 retry.go:31] will retry after 2.181952ms: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.479335  368946 retry.go:31] will retry after 4.59575ms: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.484537  368946 retry.go:31] will retry after 7.384781ms: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.492746  368946 retry.go:31] will retry after 9.726113ms: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.502969  368946 retry.go:31] will retry after 28.819192ms: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
I0127 13:12:40.532218  368946 retry.go:31] will retry after 27.64434ms: open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/scheduled-stop-973528/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-973528 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-973528 -n scheduled-stop-973528
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-973528
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-973528 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-973528
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-973528: exit status 7 (76.879551ms)

                                                
                                                
-- stdout --
	scheduled-stop-973528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-973528 -n scheduled-stop-973528
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-973528 -n scheduled-stop-973528: exit status 7 (65.891067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-973528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-973528
--- PASS: TestScheduledStopUnix (115.51s)
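The retry.go lines in the TestScheduledStopUnix output above show progressively longer waits while polling for the scheduled-stop pid file. A small illustrative Go helper with the same shape (poll, then double the delay until a deadline) is sketched below; the path and timings are examples, not the values the test uses, and this is not minikube's retry package.

// retryfile.go: illustrative backoff loop; not minikube source.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path, doubling the sleep between attempts until the
// file exists or the deadline passes, roughly the shape of the retries logged above.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 100 * time.Microsecond
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !os.IsNotExist(err) {
			return err // unexpected error: stop retrying
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// example path only; the test polls a pid file under the profile directory
	if err := waitForFile("/tmp/example.pid", 2*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pid file present")
}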

                                                
                                    
TestRunningBinaryUpgrade (199.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3975403259 start -p running-upgrade-413928 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0127 13:14:39.548300  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3975403259 start -p running-upgrade-413928 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m10.366852058s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-413928 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0127 13:16:08.031418  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-413928 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.984325221s)
helpers_test.go:175: Cleaning up "running-upgrade-413928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-413928
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-413928: (1.311326666s)
--- PASS: TestRunningBinaryUpgrade (199.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-392035 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-392035 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.419576ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-392035] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
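The MK_USAGE failure above (exit status 14) is a mutually-exclusive flag check: --kubernetes-version cannot be combined with --no-kubernetes. A toy Go sketch of that kind of validation follows; it is illustrative only and not minikube's start command.

// flagconflict.go: toy sketch of a mutually-exclusive flag check.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without deploying Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// mirror the rule seen in the log: the two flags are mutually exclusive
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // usage-error exit code observed above
	}
	fmt.Println("flags are compatible")
}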

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-392035 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-392035 --driver=kvm2  --container-runtime=crio: (1m37.961138553s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-392035 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.21s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (43.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-392035 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0127 13:15:51.099540  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-392035 --no-kubernetes --driver=kvm2  --container-runtime=crio: (42.555726873s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-392035 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-392035 status -o json: exit status 2 (229.410823ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-392035","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-392035
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.65s)
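The exit-status-2 result above still prints a JSON status document on stdout ({"Name":"NoKubernetes-392035","Host":"Running",...}). A short Go sketch that decodes that shape is shown below; it assumes a minikube binary on PATH and is not code from the test suite.

// statusjson.go: sketch of decoding `minikube status -o json` output.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// field names follow the JSON printed above for NoKubernetes-392035
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// `minikube status` exits non-zero (status 2 above) when components are
	// stopped, so keep whatever stdout was produced and ignore the exit error.
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-392035", "status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n", st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}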

                                                
                                    
TestNoKubernetes/serial/Start (29.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-392035 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-392035 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.289954072s)
--- PASS: TestNoKubernetes/serial/Start (29.29s)

                                                
                                    
TestNetworkPlugins/group/false (3.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-211629 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-211629 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (131.768781ms)

                                                
                                                
-- stdout --
	* [false-211629] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20317
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:16:29.841257  405584 out.go:345] Setting OutFile to fd 1 ...
	I0127 13:16:29.841529  405584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:16:29.841540  405584 out.go:358] Setting ErrFile to fd 2...
	I0127 13:16:29.841544  405584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:16:29.841739  405584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20317-361578/.minikube/bin
	I0127 13:16:29.842353  405584 out.go:352] Setting JSON to false
	I0127 13:16:29.843403  405584 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":21530,"bootTime":1737962260,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 13:16:29.843489  405584 start.go:139] virtualization: kvm guest
	I0127 13:16:29.845463  405584 out.go:177] * [false-211629] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 13:16:29.847233  405584 notify.go:220] Checking for updates...
	I0127 13:16:29.847253  405584 out.go:177]   - MINIKUBE_LOCATION=20317
	I0127 13:16:29.848758  405584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 13:16:29.850128  405584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20317-361578/kubeconfig
	I0127 13:16:29.852797  405584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20317-361578/.minikube
	I0127 13:16:29.859226  405584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 13:16:29.860954  405584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 13:16:29.863163  405584 config.go:182] Loaded profile config "NoKubernetes-392035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0127 13:16:29.863330  405584 config.go:182] Loaded profile config "kubernetes-upgrade-511736": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0127 13:16:29.863458  405584 config.go:182] Loaded profile config "running-upgrade-413928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0127 13:16:29.863557  405584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 13:16:29.907690  405584 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 13:16:29.908856  405584 start.go:297] selected driver: kvm2
	I0127 13:16:29.908878  405584 start.go:901] validating driver "kvm2" against <nil>
	I0127 13:16:29.908892  405584 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 13:16:29.911063  405584 out.go:201] 
	W0127 13:16:29.912107  405584 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0127 13:16:29.913158  405584 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-211629 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-211629" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 13:16:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.12:8443
  name: running-upgrade-413928
contexts:
- context:
    cluster: running-upgrade-413928
    user: running-upgrade-413928
  name: running-upgrade-413928
current-context: running-upgrade-413928
kind: Config
preferences: {}
users:
- name: running-upgrade-413928
  user:
    client-certificate: /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/running-upgrade-413928/client.crt
    client-key: /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/running-upgrade-413928/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-211629

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-211629"

                                                
                                                
----------------------- debugLogs end: false-211629 [took: 2.839592865s] --------------------------------
helpers_test.go:175: Cleaning up "false-211629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-211629
--- PASS: TestNetworkPlugins/group/false (3.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-392035 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-392035 "sudo systemctl is-active --quiet service kubelet": exit status 1 (223.323772ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.315122607s)
--- PASS: TestNoKubernetes/serial/ProfileList (16.05s)

                                                
                                    
TestPause/serial/Start (53.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-715621 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-715621 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (53.023410578s)
--- PASS: TestPause/serial/Start (53.02s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-392035
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-392035: (1.283352736s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (30.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-392035 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-392035 --driver=kvm2  --container-runtime=crio: (30.784341847s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-392035 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-392035 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.693935ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (132.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1583883785 start -p stopped-upgrade-619602 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1583883785 start -p stopped-upgrade-619602 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (50.402942345s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1583883785 -p stopped-upgrade-619602 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1583883785 -p stopped-upgrade-619602 stop: (2.158513677s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-619602 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-619602 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.181431598s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (132.74s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (90.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m30.715241566s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.72s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (93.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m33.640693456s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.64s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-619602
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-619602: (1.156404165s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (103.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0127 13:21:08.030990  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m43.344919056s)
--- PASS: TestNetworkPlugins/group/calico/Start (103.35s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-211629 "pgrep -a kubelet"
I0127 13:21:40.192565  368946 config.go:182] Loaded profile config "auto-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-211629 replace --force -f testdata/netcat-deployment.yaml
I0127 13:21:41.082379  368946 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0127 13:21:41.085567  368946 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hfww9" [cfe47bc3-0e17-4704-b7a2-a8dd40b42f33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hfww9" [cfe47bc3-0e17-4704-b7a2-a8dd40b42f33] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003198971s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.93s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-211629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (81.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.569290906s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.57s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vdvtp" [6d53a2c3-76c6-4452-af45-3e0526bf826d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006960785s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-211629 "pgrep -a kubelet"
I0127 13:22:20.641048  368946 config.go:182] Loaded profile config "kindnet-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)
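
KubeletFlags simply shells into the node and prints the running kubelet together with its full command line, which is how the test inspects the flags the kubelet was actually started with (pgrep -a shows the PID plus the complete argument list):

    out/minikube-linux-amd64 ssh -p kindnet-211629 "pgrep -a kubelet"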

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-211629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zhdx2" [b2d99bb7-0035-41a1-a6f3-9a3b0b5491a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zhdx2" [b2d99bb7-0035-41a1-a6f3-9a3b0b5491a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004311874s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (71.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m11.240014956s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-211629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-flnkf" [c996928f-848b-4f8d-a12d-3463a9471de5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006776641s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-211629 "pgrep -a kubelet"
I0127 13:22:45.495193  368946 config.go:182] Loaded profile config "calico-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-211629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cnjht" [d7bb7faf-d5e4-419c-9516-033ebfc7b33f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-cnjht" [d7bb7faf-d5e4-419c-9516-033ebfc7b33f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003051012s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (89.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.186849996s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.19s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-211629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (73.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-211629 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m13.048636328s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.05s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-211629 "pgrep -a kubelet"
I0127 13:23:30.024623  368946 config.go:182] Loaded profile config "custom-flannel-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-211629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-rp4mm" [2c17ed0b-faad-4a33-853c-ed7fcfe75858] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-rp4mm" [2c17ed0b-faad-4a33-853c-ed7fcfe75858] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.00369034s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-211629 "pgrep -a kubelet"
I0127 13:23:33.808825  368946 config.go:182] Loaded profile config "enable-default-cni-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-211629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-211629 replace --force -f testdata/netcat-deployment.yaml: (1.141726732s)
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cmk8m" [2bc8fa95-ca7f-41a4-b113-24085aa9bb80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-cmk8m" [2bc8fa95-ca7f-41a4-b113-24085aa9bb80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004589738s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.45s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-211629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-211629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (95.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-563155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-563155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m35.992196844s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (95.99s)
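
The no-preload group starts with --preload=false, so minikube skips its preloaded image tarball and pulls components individually, which helps explain why this FirstStart (~96s) is slower than most of the CNI Start runs above. The invocation from the log, repeated here only to highlight that flag (--alsologtostderr omitted):

    out/minikube-linux-amd64 start -p no-preload-563155 --memory=2200 \
      --preload=false --wait=true \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1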

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-swxp6" [dc505092-60c2-4303-bf04-51f7614cdf34] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004874105s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-211629 "pgrep -a kubelet"
I0127 13:24:26.183772  368946 config.go:182] Loaded profile config "flannel-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-211629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5xbf8" [6113d95e-0fe1-4e5f-a532-c06a56e8398b] Pending
helpers_test.go:344: "netcat-5d86dc444-5xbf8" [6113d95e-0fe1-4e5f-a532-c06a56e8398b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004114761s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-211629 "pgrep -a kubelet"
I0127 13:24:30.602239  368946 config.go:182] Loaded profile config "bridge-211629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (15.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-211629 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bhnh9" [b826afed-8d9e-4001-8336-ce8818dec92c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bhnh9" [b826afed-8d9e-4001-8336-ce8818dec92c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.004870024s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.71s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-211629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-211629 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-211629 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E0127 13:36:02.617050  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:08.032824  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:24.790896  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:24.797286  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:24.808615  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:24.829975  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:24.871389  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:24.952836  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:25.114353  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:25.436073  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:26.078124  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:27.359511  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:29.921798  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:35.043434  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:41.050900  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:36:45.285359  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:37:05.766728  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:37:14.405416  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:37:39.249346  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/calico-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:37:46.728250  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/default-k8s-diff-port-441438/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:38:30.267892  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:38:34.952685  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (96.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-174381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-174381 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m36.90363833s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-441438 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-441438 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m21.243190785s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-563155 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a551232-b4a1-4564-9b28-4a1973a49767] Pending
helpers_test.go:344: "busybox" [5a551232-b4a1-4564-9b28-4a1973a49767] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5a551232-b4a1-4564-9b28-4a1973a49767] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.004610601s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-563155 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.31s)
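
DeployApp creates the busybox pod from testdata/busybox.yaml, waits for the integration-test=busybox selector to report healthy, and then execs ulimit -n as a quick sanity check that exec works inside the pod. A manual approximation with the same context; the kubectl wait call stands in for the helper's polling:

    kubectl --context no-preload-563155 create -f testdata/busybox.yaml
    kubectl --context no-preload-563155 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-563155 exec busybox -- /bin/sh -c "ulimit -n"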

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-563155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-563155 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.042978145s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-563155 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)
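
EnableAddonWhileActive turns on the metrics-server addon on a running cluster while overriding both the image and its registry; the fake.domain registry looks intentional (the image can never actually be pulled), presumably so the test exercises the override plumbing rather than a working metrics-server. The describe call only confirms the deployment object was created:

    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-563155 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context no-preload-563155 describe deploy/metrics-server -n kube-system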

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-563155 --alsologtostderr -v=3
E0127 13:26:08.031579  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/addons-645690/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-563155 --alsologtostderr -v=3: (1m31.082142351s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-441438 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2f14cacf-6654-4140-98b1-ca95fecf1d55] Pending
helpers_test.go:344: "busybox" [2f14cacf-6654-4140-98b1-ca95fecf1d55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2f14cacf-6654-4140-98b1-ca95fecf1d55] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.004775359s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-441438 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (13.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-174381 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [482ca05a-2e53-4c6e-90f8-46b7ac50a745] Pending
helpers_test.go:344: "busybox" [482ca05a-2e53-4c6e-90f8-46b7ac50a745] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [482ca05a-2e53-4c6e-90f8-46b7ac50a745] Running
E0127 13:26:41.051864  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:41.058252  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:41.069591  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:41.090907  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:41.132198  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:41.213607  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:41.375904  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:41.697604  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:42.339221  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:43.620686  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.004141856s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-174381 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-441438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-441438 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-441438 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-441438 --alsologtostderr -v=3: (1m31.189901713s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-174381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-174381 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-174381 --alsologtostderr -v=3
E0127 13:26:46.182020  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:26:51.303467  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:01.545099  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:14.404276  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:14.410613  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:14.421907  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:14.443213  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:14.484563  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:14.565961  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:14.727518  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:15.049748  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:15.691857  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:16.973431  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:19.535436  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:22.026641  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/auto-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:27:24.656975  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/kindnet-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-174381 --alsologtostderr -v=3: (1m31.050791326s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-563155 -n no-preload-563155
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-563155 -n no-preload-563155: exit status 7 (66.671279ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-563155 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
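
EnableAddonAfterStop leans on minikube's status exit codes: with the profile stopped, status --format={{.Host}} prints Stopped and exits 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped profile. The same behaviour can be observed by hand (the echo line is just a way to surface the exit code):

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-563155 -n no-preload-563155
    echo "status exit code: $?"   # 7 while the profile is stopped, per the log above
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-563155 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4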

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438: exit status 7 (73.964084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-441438 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (326.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-441438 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-441438 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m26.629509103s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (326.90s)
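
SecondStart restarts the previously stopped profile with the same non-default API server port and waits for the cluster to come back; the start itself (5m26.6s) accounts for nearly all of the 326.9s total. The restart is just the original start command repeated, with --apiserver-port=8444 being the flag under test (--alsologtostderr omitted):

    out/minikube-linux-amd64 start -p default-k8s-diff-port-441438 --memory=2200 \
      --apiserver-port=8444 --wait=true \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.1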

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174381 -n embed-certs-174381
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-174381 -n embed-certs-174381: exit status 7 (73.302995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-174381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-838260 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-838260 --alsologtostderr -v=3: (5.288365951s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-838260 -n old-k8s-version-838260: exit status 7 (69.852775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-838260 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-54tkj" [51668fc0-46d6-49b8-a08f-77cfd03f597a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004886657s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-54tkj" [51668fc0-46d6-49b8-a08f-77cfd03f597a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003968012s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-441438 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-441438 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
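
VerifyKubernetesImages lists the images cached in the profile as JSON; the "Found non-minikube image" lines flag images the test does not recognise as part of the stock set (kindnetd and the busybox test image here) and appear to be informational, since the subtest still passes. The listing itself:

    out/minikube-linux-amd64 -p default-k8s-diff-port-441438 image list --format=json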

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-441438 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438: exit status 2 (260.846373ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438: exit status 2 (254.256445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-441438 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-441438 -n default-k8s-diff-port-441438
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)
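The Pause step above is a fixed command sequence: pause the profile, confirm that status --format={{.APIServer}} prints Paused and --format={{.Kubelet}} prints Stopped (each exits with status 2, which the test explicitly treats as acceptable), then unpause and re-check. A rough Go sketch of that flow follows; it assumes the minikube binary at out/minikube-linux-amd64 and the profile name from the log, and it is not the start_stop_delete_test.go code itself.

	// pausecheck.go: reproduce the pause / status / unpause sequence from the
	// Pause log above. Not the test's own code; paths and names come from the log.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	const profile = "default-k8s-diff-port-441438"

	// status runs `minikube status` with a Go-template format and returns the
	// trimmed output plus the exit code (2 is what the log shows while paused).
	func status(format string) (string, int) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format="+format, "-p", profile, "-n", profile)
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero exit is expected while paused ("may be ok" in the log).
			return strings.TrimSpace(string(out)), exitErr.ExitCode()
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		return strings.TrimSpace(string(out)), 0
	}

	func run(args ...string) {
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

	func main() {
		run("pause", "-p", profile, "--alsologtostderr", "-v=1")
		if out, code := status("{{.APIServer}}"); out != "Paused" || code != 2 {
			fmt.Printf("unexpected apiserver state %q (exit %d)\n", out, code)
		}
		if out, code := status("{{.Kubelet}}"); out != "Stopped" || code != 2 {
			fmt.Printf("unexpected kubelet state %q (exit %d)\n", out, code)
		}
		run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
	}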

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-639843 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 13:33:57.971785  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/custom-flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:34:02.658124  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/enable-default-cni-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:34:19.966261  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:34:31.286657  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
E0127 13:34:39.548271  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/functional-223147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-639843 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (49.084917389s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.09s)
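newest-cni starts with --network-plugin=cni and a custom pod CIDR but only waits on apiserver, system_pods and default_sa; the later DeployApp, UserAppExistsAfterStop and AddonExistsAfterStop steps then log the "cni mode requires additional setup before pods can schedule" warning instead of scheduling pods, which suggests this profile stops short of installing a CNI plugin. To replay the start invocation outside the harness, a hedged sketch is below with the flags copied verbatim from the log (binary path and profile name included).

	// cnistart.go: replay the newest-cni FirstStart invocation recorded above.
	// All flags are copied verbatim from the log line; nothing else is added.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "newest-cni-639843",
			"--memory=2200",
			"--alsologtostderr",
			"--wait=apiserver,system_pods,default_sa",
			"--network-plugin=cni",
			"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
			"--driver=kvm2",
			"--container-runtime=crio",
			"--kubernetes-version=v1.32.1")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}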

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-639843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-639843 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.155534188s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)
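The addons enable call above uses minikube's per-component override flags: --images overrides the image an addon component uses (here MetricsServer is pointed at registry.k8s.io/echoserver:1.4) and --registries overrides the registry it is pulled from (fake.domain, evidently a placeholder in this suite). A short sketch of building and running the same invocation, with the flag values taken from the log and the helper below purely illustrative:

	// enableaddon.go: rebuild the metrics-server enable call from the log, with
	// the per-component image/registry overrides expressed as maps. The override
	// values are exactly the ones logged; the overrides helper is illustrative.
	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// overrides renders Component=Value[,Component=Value...] for --images/--registries.
	func overrides(m map[string]string) string {
		parts := make([]string, 0, len(m))
		for k, v := range m {
			parts = append(parts, k+"="+v)
		}
		return strings.Join(parts, ",")
	}

	func main() {
		images := map[string]string{"MetricsServer": "registry.k8s.io/echoserver:1.4"}
		registries := map[string]string{"MetricsServer": "fake.domain"}

		cmd := exec.Command("out/minikube-linux-amd64",
			"addons", "enable", "metrics-server", "-p", "newest-cni-639843",
			"--images="+overrides(images),
			"--registries="+overrides(registries))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}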

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-639843 --alsologtostderr -v=3
E0127 13:34:47.668202  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/flannel-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-639843 --alsologtostderr -v=3: (7.348268656s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639843 -n newest-cni-639843
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639843 -n newest-cni-639843: exit status 7 (79.900729ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-639843 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (40.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-639843 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0127 13:34:58.989310  368946 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/bridge-211629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-639843 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (39.85247754s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-639843 -n newest-cni-639843
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-639843 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
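VerifyKubernetesImages lists the images present in the profile and reports anything it does not classify as a minikube image (here kindest/kindnetd:v20241108-5c6d2daf). The sketch below is only a rough stand-alone approximation: it assumes the plain one-reference-per-line output of minikube image list rather than the --format=json output the test parses, and its allow-list is purely illustrative, not the test's classification rule.

	// imagecheck.go: a rough approximation of the image check above. Assumes the
	// default one-image-per-line output of `minikube image list`; the allow-list
	// is illustrative and is NOT the rule start_stop_delete_test.go applies.
	package main

	import (
		"bufio"
		"bytes"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "newest-cni-639843", "image", "list").Output()
		if err != nil {
			log.Fatal(err)
		}

		// Hypothetical allow-list of images expected in a freshly started profile.
		expected := map[string]bool{
			"registry.k8s.io/kube-apiserver:v1.32.1": true,
		}

		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if img := sc.Text(); img != "" && !expected[img] {
				fmt.Println("Found non-minikube image:", img)
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}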

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-639843 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-639843 -n newest-cni-639843
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-639843 -n newest-cni-639843: exit status 2 (239.451525ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-639843 -n newest-cni-639843
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-639843 -n newest-cni-639843: exit status 2 (229.4179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-639843 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-639843 -n newest-cni-639843
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-639843 -n newest-cni-639843
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.31s)

                                                
                                    

Test skip (39/312)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 3.6
267 TestNetworkPlugins/group/cilium 3.2
275 TestStartStop/group/disable-driver-mounts 0.17
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-645690 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-211629 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-211629" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-211629

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-211629"

                                                
                                                
----------------------- debugLogs end: kubenet-211629 [took: 3.420683773s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-211629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-211629
--- SKIP: TestNetworkPlugins/group/kubenet (3.60s)
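Every entry in the debugLogs sweep above fails the same way because the kubenet-211629 profile is never actually started: the kubectl probes report a missing context and the host probes report a missing profile. The sweep itself is a fixed battery of cluster queries plus host-side file and daemon checks; the compressed sketch below captures the shape of it, with the host commands routed through minikube ssh as an assumption (the log does not show how net_test.go collects them).

	// debugsweep.go: a compressed version of the debugLogs battery above, not
	// the net_test.go implementation. Host-side checks are assumed to go through
	// `minikube ssh`; that routing is an assumption, not taken from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const profile = "kubenet-211629"

	// show runs one probe and prints its output under a ">>>" heading, mirroring
	// the debugLogs layout.
	func show(title, name string, args ...string) {
		fmt.Printf(">>> %s:\n", title)
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println(err)
		}
		fmt.Println()
	}

	func main() {
		// Cluster-level probes (the ">>> k8s:" entries).
		show("k8s: nodes, services, endpoints, daemon sets, deployments and pods",
			"kubectl", "--context", profile,
			"get", "nodes,svc,endpoints,ds,deploy,pods", "-A", "-o", "wide")
		show("k8s: kubectl config", "kubectl", "config", "view")

		// Host-level probes (the ">>> host:" entries).
		for _, f := range []string{"/etc/nsswitch.conf", "/etc/hosts", "/etc/resolv.conf"} {
			show("host: "+f, "out/minikube-linux-amd64", "-p", profile, "ssh", "cat "+f)
		}
		show("host: ip r s", "out/minikube-linux-amd64", "-p", profile, "ssh", "ip r s")
	}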

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-211629 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-211629" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20317-361578/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 13:16:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.12:8443
  name: running-upgrade-413928
contexts:
- context:
    cluster: running-upgrade-413928
    user: running-upgrade-413928
  name: running-upgrade-413928
current-context: running-upgrade-413928
kind: Config
preferences: {}
users:
- name: running-upgrade-413928
  user:
    client-certificate: /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/running-upgrade-413928/client.crt
    client-key: /home/jenkins/minikube-integration/20317-361578/.minikube/profiles/running-upgrade-413928/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-211629

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-211629" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-211629"

                                                
                                                
----------------------- debugLogs end: cilium-211629 [took: 3.05727232s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-211629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-211629
--- SKIP: TestNetworkPlugins/group/cilium (3.20s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-118673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-118673
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    